Got something to say?

Mac versus PC

In April’s Update ["2 minutes on...,"] you quoted Mr. Mikko Hypponen, chief research officer of F-Secure: "Before the first PC virus appeared in 1986, the whole computer virus problem was largely seen as a Mac-only problem."

This is a great example of a factoid: a spurious (unverified, incorrect or invented) "fact" intended to create or prolong public exposure or to manipulate public opinion.

I would refer you and your readers to a paper published by ADAPSO in 1989 entitled "Computer Viruses: Dealing with Electronic Vandalism and Programmed Threats," by Gene Spafford, Kathleen Heaphy and David Ferbrache. The paper also appeared in Peter Denning's anthology Computers under Attack: Intruders, Worms and Viruses.

The paper discusses three early Apple II viruses, all predating 1986, that had limited infection rates.

More importantly, the paper notes that from January 1986 through August 1, 1989, the tally of known viruses had reached 21 IBM PC viruses with 57 minor variants, compared to three Apple II and 12 Apple Mac viruses.

If one investigates further to look at infection rates and the damage caused by PC versus Mac viruses, one quickly realizes that the impact on the Mac platform was essentially negligible compared to the PC world.

Chris Mc Donald, Las Cruces, N.M.

Who to trust

After reading the article "Defining Trust" by Dan Kaplan [April 2006], I really like the idea behind the Object Management Group's (OMG) Architecture-Driven Modernization Special Interest Group (ADMSIG) initiative "to create a [security] framework (i.e., a standard) that would calculate risk and detail the properties and components constituting trustworthy software."

However, the article goes on to indicate that the OMG's future framework will be "expressed algorithmically in terms of a meta-model," and that it will "also include the suppliers' claims (i.e., assurance arguments) and evidence about their products, determined through dynamic and static testing."

Therefore, I believe that any future framework or standard should take these "assurance arguments" further and prove three things (see the sketch after this list):

1. The framework’s protection mechanisms are correct (in other words, they’re not full of bugs, and they enforce the stated security policy).

2. The framework always uses its protection mechanisms when they’re needed (it always checks access control whenever a user asks for access to a protected resource).

3. There’s no way to circumvent the framework’s protection mechanisms (so the framework doesn’t have any "backdoors" which might let people do end-runs around the protection mechanisms).
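To make these three properties concrete, here is a minimal, hypothetical Python sketch. It is entirely my own illustration, not anything from OMG's work; the policy table, ProtectedResource class and AccessDenied exception are invented names. It shows a single choke point through which every access to a protected resource must pass:

```python
# Hypothetical illustration only: a stated security policy plus a single
# choke point ("complete mediation") through which every access must pass.

class AccessDenied(Exception):
    """Raised when the stated security policy forbids an access (point 1)."""

# The stated policy: which (user, resource) pairs hold which rights.
POLICY = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): set(),   # bob holds no rights at all
}

class ProtectedResource:
    def __init__(self, name, contents):
        self.name = name
        self._contents = contents    # no public accessor: read() is the only path

    def read(self, user):
        # Point 2: the protection mechanism is invoked on every request.
        if "read" not in POLICY.get((user, self.name), set()):
            raise AccessDenied(f"{user} may not read {self.name}")
        return self._contents

payroll = ProtectedResource("payroll.db", "salary data")
print(payroll.read("alice"))          # permitted by the stated policy
try:
    payroll.read("bob")               # denied: the check cannot be skipped
except AccessDenied as err:
    print(err)
```

Even this toy shows why the third point is the hardest to prove: Python's underscore convention cannot actually stop a caller from reaching payroll._contents directly, so a real framework would have to demonstrate non-bypassability at the language, operating system or hardware level.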

Winning any assurance argument will also involve demonstrating that the software artifacts produced under the standard or framework have been designed, built, tested, delivered, installed and configured properly, and that they are being operated properly. Further, some assurance experts may even desire formal mathematical proofs.

But my real bottom-line questions for any standard, framework or model like OMG's are these: What organization will perform the necessary security assurance evaluation, certification and accreditation for the security vendor community? Will assurance be done through the U.S. government's NIST/NIAP processes, or will it be left to the vendors?

Finally, I believe that now is the time for true information security experts and scientists to step forward and answer these assurance questions in their research and methodologies for the security community at large, so that the cyberworld will be more "trusted." I am thinking of experts like SRI International's Dr. John Rushby, program director for Formal Methods and Dependable Systems, who specializes in the application of these methods to security, and his coworker, Dr. Peter G. Neumann, principal scientist in SRI International's Computer Science Laboratory, whose work spans computer systems, networks, security and reliability.

Gene Jarboe, Severna Park, Md.

Praise from the feds

I just came across your editorial ["IT security pros get well-deserved attention"] from the February issue. It is OUTSTANDING and was FUN to read. I have been in the federal cybersecurity arena for 20-plus years, and we really do need more Jack Stanfields. Sounds funny... but true!

Geoffrey G. Stilley, VP of federal sales and marketing, Cryptek Corp.

Status quo is failing

Part of the problem is not that "innovative thinking and solution development may have hit a speed bump" ["Securing IT is innovative thinking," January 2006], but that most security people would not know innovation if it bit them in the derriere.

The status quo is failing; why not just admit it? Why is it failing? Because as long as security people seek out solutions based on flawed IT architectures, they are doomed to failure.

Rob Lewis, via email
