Via Schneier I came across a document that collects some amusing maxims about information security.
Although written in a somewhat comic tone, they are hard truths that we often overlook or fail to give enough weight to. Expressed in language that combines humor with clarity, they are devastating. I transcribe them below:
Infinity Maxim: There are an unlimited number of security vulnerabilities for a given security device, system, or program, most of which will never be discovered (by the good guys or bad guys).
Arrogance Maxim: The ease of defeating a security device or system is proportional to how confident/arrogant the designer, manufacturer, or user is about it, and to how often they use words like “impossible” or “tamper-proof”.
Ignorance is Bliss Maxim: The confidence that people have in security is inversely proportional to how much they know about it.
Be Afraid, Be Very Afraid Maxim: If you’re not running scared, you have bad security or a bad security product.
High-Tech Maxim: The amount of careful thinking that has gone into a given security device, system, or program is inversely proportional to the amount of high-technology it uses.
Schneier’s Maxim #1: The more excited people are about a given security technology, the less they understand (1) that technology and (2) their own security problems.
Low-Tech Maxim: Low-tech attacks work (even against high-tech devices and systems).
Father Knows Best Maxim: The amount that (non-security) senior managers in any organization know about security is inversely proportional to (1) how easy they think security is, and (2) how much they will micro-manage security and invent arbitrary rules.
Huh Maxim: When a (non-security) senior manager, bureaucrat, or government official talks publicly about security, he or she will usually say something stupid, unrealistic, inaccurate, and/or naïve.
Voltaire’s Maxim: The problem with common sense is that it is not all that common.
Yippee Maxim: There are effective, simple, & low-cost countermeasures (at least partial countermeasures) to most vulnerabilities.
Arg Maxim: But users, manufacturers, managers, & bureaucrats will be reluctant to implement them for reasons of inertia, pride, bureaucracy, fear, wishful thinking, and/or cognitive dissonance.
Show Me Maxim: No serious security vulnerability, including blatantly obvious ones, will be dealt with until there is overwhelming evidence and widespread recognition that adversaries have already catastrophically exploited it. In other words, “significant psychological (or literal) damage is required before any significant security changes will be made”.
I Just Work Here Maxim: No salesperson, engineer, or executive of a company that sells security products or services is prepared to answer a significant question about vulnerabilities, and few potential customers will ever ask them one.
Bob Knows a Guy Maxim: Most security products and services will be chosen by the end-user based on purchase price plus hype, rumor, innuendo, hearsay, and gossip.
Familiarity Maxim: Any security technology becomes more vulnerable to attacks when it becomes more widely used, and when it has been used for a longer period of time.
Antique Maxim: A security device, system, or program is most vulnerable near the end of its life.
Payoff Maxim: The more money that can be made from defeating a technology, the more attacks, attackers, and hackers will appear.
I Hate You Maxim 1: The more a given technology is despised or distrusted, the more attacks, attackers, and hackers will appear.
I Hate You Maxim 2: The more a given technology causes hassles or annoys security personnel, the less effective it will be.
Shannon’s (Kerckhoffs’) Maxim: The adversaries know and understand the security hardware and strategies being employed.
Corollary to Shannon’s Maxim: Thus, “Security by Obscurity”, i.e., security based on keeping long-term secrets, is not a good idea.
Gossip Maxim: People and organizations can’t keep secrets.
Plug into the Formula Maxim: Engineers don’t understand security. They think nature is the adversary, not people. They tend to work in solution space, not problem space. They think systems fail stochastically, not through deliberate, intelligent, malicious intent.
It is very hard to single out just one, but if I had to pick, I would go with:
Backwards Maxim: Most people will assume everything is secure until provided strong evidence to the contrary – exactly backwards from a reasonable approach.
The rest of the maxims can be found in the original document. The text is the work of Roger G. Johnston, of the Vulnerability Assessment Team at Argonne National Laboratory.
Best regards to everyone, and have a good week :)
After looking through the PPT (brilliant), my favorite is the «Backwards Maxim»… and I now see it is exactly the one you highlighted :-)
They really are hard truths, and they apply well beyond the purely technological. For example, SOX is the great evidence for the Backwards Maxim, coming after Enron and WorldCom. And, with everything that is going on now, there will surely be a SOX II. Unfortunately, the Arrogance Maxim can always be applied to SOX, Basel, and the like. A pity…
Regards
Jaaaarl, I already posted about this on the Inteco blog, hahaha
jcbarreto,
That maxim is one to frame :)
Manu,
It does seem SOX needs a revision, yes. So many of the maxims apply to it that I, for one, have lost count. It is clear that regulatory requirements need to be drastically tightened across the globe.
David,
Yes, I saw it shortly after writing this post. Sorry :)
Best regards to you both :)