Archive for the 'Security' Category

Aug 08 2013

Security Development Lifecycle Training

Last year we tried to outsource the classroom-based training for our SDL effort to a third party…that didn’t go so well. I won’t name the company we used, but we were disappointed enough with the first round of the training that we decided to go our own route.  To that end I was tasked with creating the content for the training…and I have been heads-down most of this year working on several classes:

  • Threat Modeling
  • Secure Coding in C and C++
  • Secure Coding in C#

The Threat Modeling class has been completed (although it could stand to be updated and cleaned up a bit).  The Secure Coding in C and C++ class was also completed, but the feedback from my second group of attendees was that they do their development on the ARM platform and wanted to see the exploits in the hands-on lab exercises run on that platform.  The Secure Coding in C# class is still being built out.

The good news is that I have been able to get a Debian Linux image built (with a GUI) for the Secure Coding in C and C++ class using the QEMU ARM emulator.  The next step is to set up the networking so that I can pull additional packages into the image and build out a complete development environment.  This has been driving me crazy for the past couple of months because the installer and the QEMU disk image were constantly giving me problems.  Today was a “Good” day…
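
For anyone trying the same thing, this is roughly the invocation I’m working with.  It’s a minimal sketch assuming a Debian armel build targeting QEMU’s versatilepb board; the kernel, initrd, and disk file names below are placeholders for whatever your install produced:

qemu-system-arm -M versatilepb -m 256 \
  -kernel vmlinuz-3.2.0-4-versatile \
  -initrd initrd.img-3.2.0-4-versatile \
  -hda debian_wheezy_armel.qcow2 \
  -append "root=/dev/sda1" \
  -net nic -net user

The -net nic -net user pair gives the guest QEMU’s user-mode (slirp) networking, which is the easy path to outbound connectivity for apt without any host-side bridge or TAP configuration.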


No responses yet

May 26 2011

Security Development Lifecycle – Training

Lately I’ve been working on incorporating a Security Development Lifecycle (SDL) into the development processes of a smart grid vendor for their smart grid products.  It’s no secret that everyone (from the vendors to the utilities to the public utility commissions to NERC and FERC) is concerned about the rush to deploy smart grid or – more aptly – advanced metering infrastructure (AMI) systems.  There are many issues to consider in an AMI deployment: internal security at the utility, securing the endpoint devices, and the security of the connecting network.  All of these are things to rightly be concerned about.  However, very few smart grid vendors have focused on the built-in security of their software.  I’m not talking about all of the bells and whistles they provide to secure the AMI infrastructure…I’m talking about the quality of their code.  It’s all well and good to have lots of security features that your customers can turn on and off…but what lurks under the hood?  Buffer overflows? Heap overflows? Cross-site scripting?  Cross-site request forgery? I could go on and on.

To deal with these concerns and potential vulnerabilities I’ve been working on implementing Microsoft’s Security Development Lifecycle (SDL) in our product development groups.  This has been a real challenge given that we previously didn’t worry about such issues, since meters (electric, gas, and water) were isolated, electro-mechanical devices without two-way (or in some cases even one-way) communication capabilities.  I plan to post updates on implementing an SDL in this blog in hopes that others can learn from our experience.

One of the primary components of an SDL is a software security training program.  Developers and development management tend to focus primarily on one thing: writing code and getting it working as fast as possible.  In many cases security is not even an afterthought, and even when it is given some consideration, many developers don’t have experience writing code with security in mind.  This is where a software security training program is essential.  It needs to cover a wide variety of topics: an overview of the SDL process, secure coding in C/C++/Java/.NET, threat modeling, and secure software architecture, to name a few.  In today’s market there are two options in software security training for an organization looking to stand up an SDL:

  1. Do it yourself
  2. Outsource

From a “do it yourself” perspective, one of the hardest parts is finding people skilled at secure coding within an organization that is possibly already behind the curve on software security.  All content would be developed internally – and there’s the Catch-22: how can you develop the content when your staff doesn’t yet have the skills the content is supposed to teach? On top of that you will need to set up a learning management system (LMS) to track developers as they go through the internally developed (or purchased) training.

In many cases the only viable alternative is to outsource.  Outsourcing should leverage both instructor-led training (ILT) and online classes.  The question is which training philosophy you subscribe to: ILT first with online classes as reinforcement/refresher, or online classes first followed by ILT.  I’ll try to explain both approaches below:

Leveraging ILT before proceeding to online training is based on the idea of getting the most in-depth training upfront: the ILT classes are the key component of the program, and the online classes are there to reinforce the material learned in them.  In addition, the online classes can be used as refreshers after some specified period of time – say, approximately a year – after the initial ILT/online classes have been taken.  The trick is that the online class content needs to be updated during that time…otherwise it becomes stale and loses value for the developers.  The big benefit here is that you put a lot of effort in upfront to get your developers trained and can leverage that training as soon as possible.

Flipping the sequence around has the online training occurring before the ILT classes.  The philosophy here is that the developers first get a broad education in what an SDL is and what steps make up the overall process, and then you’re able to focus the ILT more effectively, providing attendees a class that explores the content more completely.  One of the big benefits of this approach is that you can provide some training to all of your developers (depending, of course, on how many seats you buy for the e-learning system) and then take those who are key and provide them the ILT first.

It’s hard to say which is the better approach – there are too many factors to consider, cost and schedule being the primary ones.  It is my belief that both approaches are equally valid.  It also depends on how big your developer population is and how quickly you need to get training started.  From my own perspective, starting with e-learning and then moving to ILT is more effective: it brings all of your developers to the same knowledge level before they go through the ILT, and it doesn’t prevent you from using the e-learning later as a refresher for the material they learned in the ILT.  I’d be interested in hearing others’ thoughts on this as well.

2 responses so far

Mar 30 2011

What is NERC CIP Compliance?

One of the biggest challenges I find in talking to customers about Smart Grid/AMI deployments is answering the question “Is your solution/architecture NERC CIP compliant?”  It’s somewhat frustrating, since there’s no certification (which is typically where the question is coming from) that I can point to – you know, something like UL listing or Windows Hardware Quality Labs (WHQL) testing – that says a solution or architecture is NERC CIP compliant.  In many cases I have to spend a fair amount of time resetting customer expectations about what NERC CIP (and in many cases the NIST IR 7628) really requires.

The NERC CIP is a subset of the NERC Reliability Standards and comprises a group of nine standards specifying security requirements utilities must meet.  The standards include the following:

  • CIP-001 – Sabotage Reporting
  • CIP-002 – Critical Cyber Asset Identification
  • CIP-003 – Security Management Controls
  • CIP-004 – Personnel and Training
  • CIP-005 – Electronic Security Perimeter(s)
  • CIP-006 – Physical Security of Critical Cyber Assets
  • CIP-007 – Systems Security Management
  • CIP-008 – Incident Reporting and Response Planning
  • CIP-009 – Recovery Plans for Critical Cyber Assets

Version 3 of the NERC CIP standards is currently in effect.  Version 4 of the standards has passed its ballot vote, although a mandatory effective date has not yet been set.

Each standard covers a different domain, but the common feature among them is that they focus more on process and policy than on actual technology.  Does that mean they don’t touch the technology?  No…by no means.  In fact, some of the requirements in particular CIP standards (CIP-002, CIP-003, CIP-005, CIP-007 and CIP-009) can be met based on the technical capabilities of the Smart Grid/AMI system being deployed.  That said, the standards do not mandate specific technologies.  For example, the PCI-DSS standard requires that organizations use firewalls and anti-virus as part of their technical controls – a clear case where the standard states exactly what technical controls an organization must use in order to be compliant. NERC CIP does not do that directly.

NERC CIP is written more broadly than standards like PCI-DSS.  In many cases the wording leaves it up to the utility or the industry to decide how the standard is applied and interpreted.  So while there are ways to claim that an AMI or Smart Grid system meets specific requirements within the NERC CIP standards based on its technical controls, there is no way to simply say “yes, it is NERC CIP compliant.”

2 responses so far

Mar 24 2011

Smart Grid Security Vulnerabilities?


I’ve been working for Itron for the past 14 months, the last 5 of them as the Security Engineering Team lead for the company.  I need to keep abreast of current security trends in the Smart Grid industry (I’m not going to go into the Smart Grid vs. AMI discussion at the moment), and every so often I come across rather glaring mistakes in information that, if not corrected, can lead to significant, unnecessary concerns about the security of Smart Grid or AMI deployments.  Normally I’m not that picky about correcting such mistakes, but this one, in my opinion, needed a response, as opponents of Smart Grids could use it as part of their arguments against Smart Grid technology and deployments.

Case in point: Guido Bartels‘ “Combating Smart Grid Vulnerabilities” article in the March 2011 issue of the Journal of Energy Security.  On the whole, the article is spot on.  I think Mr. Bartels does an excellent job laying out the case for the efforts being made by utilities, and by vendors as well, to secure Smart Grid deployments.  I have only one small issue with the article, and that is the incorrect use of a graph titled “Number of New Smart Grid Vulnerabilities”.  This graph, developed by IBM‘s X-Force, can also be found here.  It is actually a histogram of the number of new vulnerabilities of all kinds – not Smart Grid vulnerabilities specifically – identified by IBM’s X-Force Research and Development team from 2000 through the first half of 2010.  Unfortunately it is incorrectly labeled in the article, and I hope the editors will do their readers the kind service of correcting the faulty title.

No responses yet

Aug 24 2009

ISSA Journal Article


I wrote an article that was published in the ISSA Journal in August 2009. The topic of the article was “De-perimeterized Architectures”, and it focuses on the Jericho Forum‘s work on a next-generation architecture that accommodates the fact that the network perimeter is becoming more porous and is passing more and more traffic, in newer protocols, than ever before.

A direct link to the article is here. (Be aware that you need to be a member of ISSA and must log in to the ISSA website to read the article.)

No responses yet

Jun 09 2009

SSH Server on Windows Server 2008 Core


I’ve been playing around (in my copious free time 😉 ) with other methods of connecting to and managing Server 2008 Core. One of the things I’ve wanted to do is SSH directly to Server 2008 Core and have the same command-line capability as I do on the console. To that end I did a quick search for similar work and found an article at TechRepublic about installing an SSH server on Windows 2008. The difference was that I wanted to install it on Server Core rather than the full-blown version of 2008.

Like David Davis over at TechRepublic, I decided to start with FreeSSHd as my SSH server. The first thing I needed to do was get it onto the Server Core VM. Rather than downloading it to my desktop and then transferring it over, I decided I would rather download it directly to the Server Core machine. To do that I needed a wget that would run on Windows; I used the wget binary I downloaded (to my desktop) from Bart Puype in Belgium. Once I copied wget to C:\Windows\System32, I used it to download the FreeSSHd.exe binary from FreeSSHd.com.
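
From the Server Core command prompt, the download looks something like this (the exact URL is from memory and may well have changed, so treat it as illustrative):

wget http://www.freesshd.com/freeSSHd.exe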

To install FreeSSHd, just run the freesshd.exe program and it will start the install wizard. A couple of items to note: on Server Core, don’t bother creating a Start Menu item for FreeSSHd, and don’t bother creating a desktop icon either. One of the problems I encountered when I installed FreeSSHd on Server Core was that I could not configure the SSH server, since the task bar icon never appeared (as is to be expected, given that there is no task bar in Server 2008 Core). To configure FreeSSHd I had to edit the freesshdservice.ini file in the C:\Program Files\freesshd directory (the default location for the installation).
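
Happily, Notepad is one of the few GUI utilities that does ship with Server Core, so the file can be edited right on the box:

notepad "C:\Program Files\freesshd\freesshdservice.ini"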

A small point to note: Server 2008 Core’s firewall is on by default (even on a domain-joined machine), and the default policy is to block all inbound connection attempts while allowing outbound connections. After installing FreeSSHd I needed to modify the firewall and decided to use netsh to do so. The command I used was

netsh advfirewall firewall add rule name="SSHd" dir=in action=allow protocol=TCP localport=22

Very simple…I love netsh 🙂
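
If you want to double-check that the rule took, the same netsh advfirewall context can list it back:

netsh advfirewall firewall show rule name="SSHd"
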
Another problem I ran into was getting the NT authentication to work. I did manage to get password authentication working, but I wanted to tie the FreeSSHd server into Windows authentication. I’m still not 100% sure where the problem lies with the NT authentication integration and will investigate it further.

One of the biggest drawbacks to FreeSSHd is that there is very little (read: almost no) documentation covering the freesshdservice.ini file. You need to read the forums over at freesshd.com to get a sense of what the settings in the file are and what specific changes to it do to the overall operation of the server. I hope to get that put together and posted here this summer, as I think others will find it useful.

To get the password authentication working, I installed FreeSSHd on a Windows Server 2003 system, created the users I wanted there, and copied the relevant portions of that freesshdservice.ini over to the one on the Server 2008 Core VM. Then, to restart the service, I would just issue net stop freesshdservice followed by net start freesshdservice, and I was good to go. As you can see from the last capture in the gallery below, I was able to connect to the server and log in using the account I had created on the Server 2003 system and copied over to the freesshdservice.ini file on the Server 2008 Core VM.

In the future I’m going to try some of the other freely available SSH servers and see if they provide easier integration with Server 2008 Core.

One response so far