Spelt and pronounced Stan–groome. Readify Senior Consultant, Technical Specialist (ALM). Release Engineer. PowerShell enthusiast.

Human-Machine Trust Failures


I jacked a visitor's badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they're enabled when you check in at building security. You're supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn't let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB's security -- I'm sure the badge was quickly deactivated and showed up in some missing-badge log next to my name -- but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn't adequately explain it to the guards. The guards knew it but didn't know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state -- how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn't happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time -- that requires understanding on both parts. Each time things go wrong, and the machine portion doesn't communicate well, the human portion trusts it a little less.

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system -- especially the machine part -- better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We've all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: "The computer is always right."

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they're easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they're bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That's why "if you see something, say something" fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That's the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

1 public comment
dbentley
4113 days ago
Great story with an impressive moral.

Our Newfound Fear of Risk


We're afraid of risk. It's a normal part of life, but we're increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren't free. They cost money, of course, but they cost other things as well. They often don't provide the security they advertise, and -- paradoxically -- they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it's not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes -- a wrong address on a warrant, a misunderstanding -- result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.

  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in dollars to implement and in their long-lasting effects on students.

  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade -- including the wars in Iraq and Afghanistan -- money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There's a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world -- among the wealthier of us who live in peaceful Western countries -- our lives have become safer.

Our notions of risk are not absolute; they're based more on how far they are from whatever we think of as "normal." So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We're bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are -- and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn't approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn't discipline -- no matter how benign the infraction -- the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There's a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they're arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved -- medicine is the big one, but other sciences as well -- we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn't able to figure out how to topple structures constructed under some new and safer building code, and an automobile won't invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you're safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you'd expect -- and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

5 public comments
silberbaer
4114 days ago
Yes. This.
New Baltimore, MI
emdeesee
4114 days ago
Money quote: "We're bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones."
Sherman, TX
stevenewey
4115 days ago
A thought-provoking essay that I for the most part agree with.

However, I was a little uncomfortable with the statement "In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes." with no citation for this 'fact' and its vague assertions.

I found this article (http://www.mpg.de/6347636/terrorism_traffic-accidents-USA), which suggests that within 12 months there were indeed more road deaths, to the tune of approximately 1,600. That is certainly fewer than died in the attacks themselves.

Of course the effect will likely have lasted longer than 12 months, but I would like to see data to back up the assertions.
Bedfordshire
ramsesoriginal
4115 days ago
Great commentary on the topic of risk assessment
Bozen/Val Gardena Italy
adamgurri
4116 days ago
Yes, a thousand times yes.
New York, NY

Test Harness for NuGet install PowerShell scripts (init.ps1, install.ps1, uninstall.ps1)


One thing that I find frustrating when creating NuGet packages is the debug experience when it comes to creating the PowerShell install scripts (init.ps1, install.ps1, uninstall.ps1).

To make the debugging easier, I’ve created a test harness Visual Studio solution that lets you change a file, compile the solution, run a single command in the Package Manager Console, and have the package uninstall and then install again. That way you can change a line of code, hit a few keystrokes and see the result straight away.

To see the code you can head to the GitHub repository. The basic instructions are on the readme:

  1. [Once off] Check out the code
  2. [Once off] Create a NuGet source directory in the checkout directory
  3. Repeat in a loop:
    1. Write the code (the structure of the solution is the structure of your NuGet package, so put the appropriately named .ps1 scripts in the tools folder; a skeleton install.ps1 is sketched after this list)
    2. Compile the solution with <F6> or <Ctrl+Shift+B> (this creates a NuGet package in the root of the solution with the name Package and version {yymm}.{ddHH}.{mmss}.nupkg – because the package version increases with time, installing from that directory will always install the latest build)
    3. Switch to the Package Manager Console <Ctrl+P, Ctrl+M>
    4. [First time] Uninstall-Package Package; Install-Package Package <enter> / [Subsequent times] <Up arrow> <enter>
  4. When done simply copy the relevant files out and reset master to get a clean slate
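
As a starting point for step 3.1, here is a minimal install.ps1 sketch to drop into the tools folder. The parameter list is the standard one NuGet passes to install.ps1 and uninstall.ps1 (init.ps1 receives only the first three); the Write-Host lines are just placeholders for your own logic:

param($installPath, $toolsPath, $package, $project)

# NuGet supplies these arguments automatically when the script runs
# in the Package Manager Console:
#   $installPath - folder the package was installed into
#   $toolsPath   - the package's tools folder
#   $package     - the package object (Id, Version, etc.)
#   $project     - the target project (not supplied to init.ps1)
Write-Host "Installing $($package.Id) $($package.Version) into $($project.Name)"
Write-Host "Tools folder: $toolsPath"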



Using SSH to Access Linux Servers in PowerShell


A question I’ve fielded now and again in the past is, “Can I use PowerShell to access Linux servers?” There were a few answers I could give, of varying degrees of usefulness depending on the requirements.

I was recently asked this again at my current workplace and discovered a project I hadn’t seen previously, a PowerShell module based on the SSH.NET library.

Once you have downloaded and imported the module, check out what is available:


Get-Command -Module SSH-Sessions
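
If the module isn’t in one of the folders listed in $env:PSModulePath, you can import it by path instead. A sketch only, assuming the download was extracted to C:\Scripts\SSH-Sessions (the path is purely illustrative):

# Point Import-Module at the extracted module folder
Import-Module C:\Scripts\SSH-Sessions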


To work with a Linux server, you first need to establish a session to the server with New-SshSession (I think this cmdlet would benefit from a Credential parameter):


New-SshSession -ComputerName PuppetVM -Username root -Password puppet
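
If you’d rather not hard-code the password, one workaround is to collect it with Get-Credential and unwrap it just before the call. A sketch only: the module still receives the password as a plain string (since, as noted above, this version has no Credential parameter), and PuppetVM/root are simply the example values from above:

$SshCredential = Get-Credential
# The module's -Password parameter expects a plain string, so unwrap the
# PSCredential at the last moment.
New-SshSession -ComputerName PuppetVM -Username $SshCredential.UserName -Password $SshCredential.GetNetworkCredential().Password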


Examine our connected sessions:


Get-SshSession | Format-Table -AutoSize


It’s now possible to enter an interactive session with this VM and run some commands, for example to look at the OS and disk space:


Enter-SshSession -ComputerName PuppetVM

cat /proc/version

df -h

exit


Similar to Invoke-Command in Windows, you can use Invoke-SshCommand to send commands to an established session and receive results back (note: you can use the -Quiet parameter if you don’t wish to see the output displayed on screen):


$Query = Invoke-SshCommand -ComputerName PuppetVM -Command "ls /root"
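
The -Quiet switch is handy when you only want the captured result. A quick sketch; assuming the module behaves as described above, the output is still returned, just not echoed, and "uname -a" is an arbitrary command chosen for illustration:

$Kernel = Invoke-SshCommand -ComputerName PuppetVM -Command "uname -a" -Quiet
$Kernel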


We can now work with these results via the $Query variable. What we get back looks like multiple strings, but is actually an object containing a single (long) string:


$Query.GetType()

$Query.Count

$Query[0].GetType()


However, we can work with the results more easily, should we need to, by breaking them up into individual strings:


$Strings = $Query[0] -split "`n"

$Strings

$Strings.Count
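
Once split, the result is an ordinary string array, so the usual pipeline tricks apply. For example, a sketch that filters the listing (the wildcard pattern is arbitrary and purely illustrative):

# Keep only entries ending in .cfg
$Strings | Where-Object { $_ -like "*.cfg" }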


So far I’ve found this module pretty useful. There are a few drawbacks, including some limitations with ESXi 5.0 and above which are mentioned on the project’s web page, but I hope the project will continue to be updated.


Lavabit E-Mail Service Shut Down


Lavabit, the more-secure e-mail service that Edward Snowden -- among others -- used, has abruptly shut down. From the message on their homepage:

I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot....

This experience has taught me one very important lesson: without congressional action or a strong judicial precedent, I would strongly recommend against anyone trusting their private data to a company with physical ties to the United States.

In case something happens to the homepage, the full message is recorded here.

More about the public/private surveillance partnership. And another news article.

Also yesterday, Silent Circle shut down its email service:

We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now.

More news stories.

This illustrates the difference between a business owned by a person, and a public corporation owned by shareholders. Ladar Levison can decide to shutter Lavabit -- a move that will personally cost him money -- because he believes it's the right thing to do. I applaud that decision, but it's one he's only able to make because he doesn't have to answer to public shareholders. Could you imagine what would happen if Mark Zuckerberg or Larry Page decided to shut down Facebook or Google rather than answer National Security Letters? They couldn't. They would be fired.

When the small companies can no longer operate, it's another step in the consolidation of the surveillance society.

1 public comment
joeythesaint
4136 days ago
"I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit." This is utter bull. Running a good, privacy-focused service shouldn't make you feel like a criminal.
Ottawa, Ontario

Risk Management Theatre: On Show At An Organization Near You


One of the concepts that will feature in the new book I am working on is “risk management theatre”. This is the name I coined for the commonly encountered control apparatus, imposed in a top-down way, which makes life painful for the innocent but can be circumvented by the guilty (the name comes by analogy with security theatre). Risk management theatre is the outcome of optimizing processes for the case that somebody will do something stupid or bad, because (to quote Bjarte Bogsnes talking about management) “there might be someone who cannot be trusted. The strategy seems to be preventative control on everybody instead of damage control on those few.”

Unfortunately risk management theatre is everywhere in large organizations, and reflects the continuing dominance of the Theory X management paradigm. The alternative to the top-down control approach is what I have called adaptive risk management, informed by human-centred management theories (for example the work of Ohno, Deming, Drucker, Denning and Dweck) and the study of how complex systems behave, particularly when they drift into failure. Adaptive risk management is based on systems thinking, transparency, experimentation, and fast feedback loops.

Here are some examples of the differences between the two approaches.

Adaptive risk management (people work to detect problems through improving transparency and feedback, and solve them through improvisation and experimentation) versus risk management theatre (management imposes controls and processes which make life painful for the innocent but can be circumvented by the guilty):

Adaptive: Continuous code review, in which engineers ask a colleague to look over their changes before check-in, technical leads review all check-ins made by their team, and code review tools allow people to comment on each others’ work once it is in trunk.
Theatre: Mandatory code review enforced by check-in gates, where a tool requires changes to be signed off by somebody else before they can be merged into trunk. This is inefficient and delays feedback on non-trivial regressions (including performance regressions).

Adaptive: Fast, automated unit and acceptance tests which inform engineers within minutes (for unit tests) or tens of minutes (for acceptance tests) if they have introduced a known regression into trunk, and which can be run on workstations before commit.
Theatre: Manual testing as a precondition for integration, especially when performed by a different team or in a different location. Like mandatory code review, this delays feedback on the effect of the change on the system as a whole.

Adaptive: A deployment pipeline which provides complete traceability of all changes from check-in to release, and which detects and rejects risky changes automatically through a combination of automated tests and manual validations.
Theatre: A comprehensive documentation trail so that in the event of a failure we can discover the human error that is the root cause of failures in the mechanistic, Cartesian paradigm that applies in the domain of systems that are not complex.

Adaptive: Situational awareness created through tools which make it easy to monitor, analyze and correlate relevant data. This includes process, business and systems level metrics as well as the discussion threads around events.
Theatre: Segregation of duties, which acts as a barrier to knowledge sharing, feedback and collaboration, and reduces the situational awareness which is essential to an effective response in the event of an incident.

It’s important to emphasize that there are circumstances in which the countermeasures on the right are appropriate. If your delivery and operational processes are chaotic and undisciplined, imposing controls can be an effective way to improve – so long as we understand they are a temporary countermeasure rather than an end in themselves, and provided they are applied with the consent of the people who must work within them.

Here are some differences between the two approaches in the field of IT:

Adaptive: Principle-based and dynamic: principles can be applied to situations that were not envisaged when the principles were created.
Theatre: Rule-based and static: when we encounter new technologies and processes (for example, cloud computing) we need to rewrite the rules.

Adaptive: Uses transparency to prevent accidents and bad behaviour. When it’s easy for anybody to see what anybody else is doing, people are more careful. As Louis Brandeis said, “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.”
Theatre: Uses controls to prevent accidents and bad behaviour. This approach is the default for legislators as a way to prove they have taken action in response to a disaster. But controls limit our ability to adapt quickly to unexpected problems, which introduces a new class of risks, for example over-reliance on emergency change processes because the standard change process is too slow and bureaucratic.

Adaptive: Accepts that systems drift into failure. Our systems and the environment are constantly changing, and there will never be sufficient information to make globally rational decisions. Humans solve our problems and we must rely on them to make judgement calls.
Theatre: Assumes humans are the problem. If people always follow the processes correctly, nothing bad can happen. Controls are put in place to manage “bad apples”, ignoring the fact that process specifications always require interpretation and adaptation in reality.

Adaptive: Rewards people for collaboration, experimentation, and system-level improvements. People collaborate to improve system-level metrics such as lead time and time to restore service, with no rewards for “productivity” at the individual or functional level. Accepts that locally rational decisions can lead to system-level failures.
Theatre: Rewards people based on personal “productivity” and local optimization: for example, operations people optimizing for stability at the expense of throughput, or developers optimizing for velocity at the expense of quality (even though these are false dichotomies).

Adaptive: Creates a culture of continuous learning and experimentation. People openly discuss mistakes in order to learn from them, and conduct post-mortems after outages or customer service problems with the goal of improving the system. People are encouraged to try things out and experiment (with the expectation that many hypotheses will be invalidated) in order to get better.
Theatre: Creates a culture of fear and mistrust. Encourages finger-pointing and a lack of ownership for errors, omissions and failures to get things done. As in: if I don’t do anything unless someone tells me to, I won’t be held responsible for any resulting failure.

Adaptive: Failures are a learning opportunity. They occur in controlled circumstances, their effects are appropriately mitigated, and they are encouraged as an opportunity to learn how to improve.
Theatre: Failures are caused by human error (usually a failure to follow some process correctly), and the primary response is to find the person responsible and punish them, and then to use further controls and processes as the main strategy to prevent future problems.

Risk management theatre is not just painful and a barrier to the adoption of continuous delivery (and indeed to continuous improvement in general). It is actually dangerous, primarily because it creates a culture of fear and mistrust. As Bogsnes says, “if the entire management model reeks of mistrust and control mechanisms against unwanted behavior, the result might actually be more, not less, of what we try to prevent. The more people are treated as criminals, the more we risk that they will behave as such.”

This kind of organizational culture is a major factor whenever we see people who are scared of losing their jobs, who engage in activities designed to protect themselves in case something goes wrong, or who attempt to make themselves indispensable by hoarding information.

I’m certainly not suggesting that controls, IT governance frameworks, and oversight are bad in and of themselves. Indeed, applied correctly, they are essential for effective risk management. ITIL, for example, allows for a lightweight change management process that is completely compatible with an adaptive approach to risk management. What’s decisive is how these frameworks are implemented: the way they are used and applied is determined by, and in turn perpetuates, organizational culture.
