Archive for the 'architecture' Category

Aug 08 2013

Security Development Lifecycle Training

Last year we tried to outsource the classroom training for our SDL effort to a third party…that didn’t go so well. I don’t want to name the company we used, but we were disappointed enough with the first round of training that we decided to go our own route.  To that end I was tasked with creating the training content, and I have been heads-down most of this year working on several classes:

  • Threat Modeling
  • Secure Coding in C and C++
  • Secure Coding C#

The Threat Modeling class is complete (although it could stand to be updated and cleaned up a bit).  The Secure Coding in C and C++ class was also completed, but the feedback from my second group of attendees, who develop for the ARM processor platform, was that they wanted to see the exploits in the hands-on lab exercises run on that platform.  The Secure Coding in C# class is still being built out.

The good news is that I have been able to build a Debian Linux image (with a GUI) for the Secure Coding in C and C++ class using the QEMU ARM emulator.  The next step is to set up the networking so that I can pull additional packages into the image and build out a complete development environment.  This has been driving me crazy for the past couple of months because the installer and the QEMU disk image were constantly giving me problems.  Today was a “Good” day…



May 26 2011

Security Development Lifecycle – Training

Lately I’ve been working on incorporating a Security Development Lifecycle (SDL) into the development processes of a smart grid vendor for their smart grid products.  It’s no secret that everyone (from the vendors to the utilities to the public utility commissions to NERC and FERC) is concerned about the rush to deploy smart grid or, more aptly, advanced metering infrastructure (AMI) systems.  There are many issues to consider when deploying an advanced metering infrastructure: internal security at the utility, securing the endpoint devices, and the security of the connecting network.  All of these are things to rightly be concerned about.  However, very few smart grid vendors have focused on the built-in security of their software.  I’m not talking about all of the bells and whistles that they provide to secure the AMI infrastructure…I’m talking about the quality of their code.  It’s all well and good to have lots of security features that your customers can turn on and off…but what lurks under the hood?  Buffer overflows? Heap overflows? Cross-site scripting?  Cross-site request forgery? I could go on and on.

To deal with these concerns and potential vulnerabilities I’ve been working on implementing Microsoft’s Security Development Lifecycle (SDL) in our product development groups.  This has been a real challenge given that we previously didn’t worry about such issues: meters (electric, gas, and water) were isolated, electro-mechanical devices without two-way (or in some cases even one-way) communication capabilities.  I plan to post updates on implementing an SDL to this blog in hopes that others can learn from our experience.

One of the primary components of an SDL is a software security training program.  Developers and development management tend to focus primarily on one thing: writing code and getting it working as fast as possible.  In many cases security is not even an afterthought, and even when it is given some consideration, many developers don’t have experience writing code with security in mind.  This is where a software security training program is essential.  It needs to cover a wide variety of topics: an overview of the SDL process, secure coding in C/C++/Java/.NET, threat modeling, and secure software architecture, to name a few.  In today’s market there are two options in software security training for an organization looking to stand up an SDL:

  1. Do it yourself
  2. Outsource

From a “do it yourself” perspective, one of the hardest parts is finding people skilled at secure coding within an organization that is already possibly behind the curve on software security.  All content would be developed internally, and there’s the Catch-22: how can you develop the content when your staff doesn’t have the skills the content is supposed to teach?  In addition, you will need to set up a learning management system (LMS) to track developers as they go through the internally developed (or perhaps purchased) training.

In many cases the only viable alternative is to outsource.  Outsourcing should leverage both instructor-led training (ILT) and online classes.  The question is which training philosophy you subscribe to: ILT first with online classes as reinforcement/refresher, or online classes first followed by ILT.  I’ll try to explain both approaches below:

Leveraging ILT before proceeding to online training is based on the idea that the most in-depth training should come upfront, with the online classes there to reinforce the material learned in the ILT classes.  In addition, the online classes can be used as refreshers some specified period of time, say a year, after the initial ILT/online classes have been taken.  The trick is that the online class content needs to be updated during that time…otherwise it becomes stale and loses value for the developers.  The big benefit here is that you put a lot of effort in upfront to get your developers trained and can leverage that training as soon as possible.

Flipping the sequence around has the online training occur before the ILT classes.  The philosophy here is that developers first get a broad understanding of what an SDL is and what steps make up the overall process, which lets you focus the ILT more effectively and give attendees a class that explores the content more completely.  This approach also allows you to provide some training to all of your developers (depending, of course, on how many seats you buy for the e-learning system) and to give your key people the ILT first.

It’s hard to say which is the better approach; there are too many factors to consider, cost and schedule being the primary ones.  I believe both approaches are equally valid, and it also depends on how big your developer population is and how quickly you need to get training started.  From my own perspective, starting with the e-learning and then moving to ILT is more effective: it lets all of your developers start at the same knowledge level before going through the ILT, and it doesn’t prevent you from using the e-learning later as a refresher for the material they learned in the ILT.  I’d be interested in hearing others’ thoughts on this as well.


Dec 17 2009

ESXi Struggles

I’ve finally built my new virtual server, the one onto which I’m going to consolidate my current machines as virtual machines. The intention is to measure the amount of energy my current systems use, migrate everything to the virtual machine world, and then measure the energy used by the VM server. The current systems are: a Sun Ultra 60 (dual 400MHz UltraSPARC II CPUs, 2GB memory, and two 20GB SCSI drives); a Dell Workstation 610 (dual 700MHz Pentium III CPUs, 768MB memory, and 20GB and 30GB IDE drives); and a home-built server (1.2GHz AMD Athlon CPU, 512MB memory, a 30GB IDE drive, and a 9.1GB SCSI drive).

The VM server consists of the following hardware:

Seasonic SS-500ES 500W power supply
Gigabyte GA-MA790GPT-UD3H
AMD Athlon II X4 630 (Propus) 2.8GHz (quad-core, 95W)
8GB memory
1 x 160GB 7200RPM SATA drive
1 x 500GB 7200RPM SATA drive

The first idea was to install Windows Server 2008 R2 Core with Hyper-V on the machine and use that to build the VM images. However, Hyper-V wouldn’t start on the Athlon II X4: as far as I can tell the CPU does support AMD-V, and I enabled virtualization in the motherboard BIOS, but no luck. So the fallback was to go with ESXi.

I wanted to use ESXi 4.0 Update 1; however, the onboard network interface (a Realtek 8111/8168 chip) is not supported by ESXi. The only supported network interfaces are gigabit interfaces (which the Realtek is; it just isn’t on the supported list, and I didn’t have a supported interface card on hand). So, I figured no problem…I’ll just use ESXi 3.5 Update 4. Well, the Realtek chip is also not supported in ESXi 3.5, but the PCI 3Com 905TX and the Intel EtherExpress Pro 10/100 are. However, SATA drives are not supported, or at least not completely. I managed to get the system installed by booting from the CD, switching to the tech support console (ALT-F1), and logging in with the unsupported login. I then loaded the AHCI driver and restarted the install, and ESXi installed nicely. However, booting off the install on the hard drives was a no-go: the AHCI driver wouldn’t load (for reasons I’m not sure of) and the system crashed. Back to square one.

I then noticed that VMware released Update 5 to ESXi 3.5 earlier this month (about two weeks ago). I read the release notes and realized that they had resolved the AHCI/SATA drive issues. I downloaded it, burned it to a CD and tried it. Bingo! It installed without a hitch and booted without a problem. Awesome. Now I’m in the process of building out my VM images.


Aug 24 2009

ISSA Journal Article


I wrote an article that was published in the ISSA Journal in August 2009. The topic of the article was “De-perimeterized Architectures”; it focuses on the Jericho Forum’s work on a next-generation architecture that accommodates the fact that the network perimeter is becoming more porous, passing more and more traffic over newer protocols than ever before.

A direct link to the article is here. (Be aware that you need to be a member of ISSA and must log in to the ISSA website to read the article.)
