Feb 11 2018

Made in America: An Informal History of the English Language in the United States by Bill Bryson

Published under Books, Reviews

Made in America: An Informal History of the English Language in the United States by Bill Bryson
My rating: 4 of 5 stars

This was a wonderful book. I’ve been reading Bill Bryson’s books for some time now and this one really does rank up there as one of his best. He traces many of the words and idioms we use today back to their origins, both in the English spoken in England and in the English originally spoken in America before and right after the Revolution. It’s rather amazing how words have changed meaning and context over the span of several hundred years. As with all of his previous works I enjoyed the middle parts of the book the most and was just a little bit tired towards the end – but this one didn’t drag at the end or lose its focus the way some of his other books do. I highly recommend it (even if it’s 24 years old now and just a bit out of date with many of the terms in current use).

View all my reviews

No responses yet

Jan 09 2018

Book Review: The Case Against Sugar by Gary Taubes

Published under Books, Reviews

The Case Against Sugar by Gary Taubes
My rating: 5 of 5 stars

An excellent book that makes a strong case for the impact that sugar has had on the health of Americans and of other cultures and societies worldwide. While I understand the frustration of many readers with Gary’s use of older source materials in the first part of the book, this, in my mind, does not detract from the impact of the information. Sugar consumption has been increasing considerably since the early 1900s and that increase correlates with the rise in obesity and diabetes both here and worldwide. Now, yes, I know the argument – “correlation does not prove causation” – and there are websites devoted to proving this (if you’re interested, see here: http://www.tylervigen.com/spurious-co…) – but in this case we’re not talking about two completely disparate issues – sugar is a food (or at least an ingredient in food) and obesity and diabetes are related to food in one way or another.

The biochemistry of sugar processing is well known and has been for years – sucrose is split into glucose and fructose, with the body processing the glucose for energy (or storing it in the liver as glycogen), while fructose is metabolized almost entirely in the liver rather than being used directly for energy. It’s the fructose that Taubes points to as the source of the problem. The sugar industry presents a picture where sugar is a harmless product and a calorie is just a calorie. Obesity and diabetes are pictured as diseases of laziness and gluttony rather than the more modern view that links them to a specific component of our diet. On top of that the medical community has been fixated on the idea that hypertension, heart disease, and other conditions are linked directly to the saturated fat in our diet. Taubes does a good job of pushing that aside and arguing that all of these issues – obesity, diabetes, hypertension, heart disease, and possibly other conditions – are more directly attributable, if not outright linked, to the dramatic increase in our sugar consumption.

The issue I have with Taubes’ book, however, is his lack of academic rigor on the subject. He continually makes statements such as “It seems that…” or “It’s possible that…” when he needs to be more certain in the science. While I feel that he has certainly convinced me that there is something there that should be investigated much further, it requires more than the individual researcher here and there (and there are individuals who are pursuing this – Dr. Robert Lustig, a pediatric endocrinologist at the UC San Francisco medical school, comes to mind) to really vet this possible – and plausible – connection.

View all my reviews

No responses yet

Dec 11 2017

MacBook Air issues

Published under Apple, Technology

I’ve had MacBooks for several years now.  I’m NOT a fan of the latest hardware designs from Apple, which make it impossible to upgrade your memory or hard drives on your own without completely replacing the laptop, but the older ones have been great.  So much so that I own three of them – a mid-2010 MacBook Air 11″, a mid-2013 MacBook Air 13″ and a mid-2010 MacBook Pro 17″.  They’re all running High Sierra, which has been a really nice operating system to work with.

Recently I decided to upgrade the MacBook Air 13″ from a 128GB SSD to a 480GB Aura SSD from Macsales.com.  I bought the drive back in January of 2017 but never got around to installing it until a week ago.  And that’s where I discovered some interesting technical issues that I didn’t expect.  Had I originally installed it and migrated my data while that MacBook was running Sierra I would never have encountered these problems and wouldn’t be writing this post – but, such is life sometimes.  Now, this is an ongoing issue that I am trying to resolve but at the very least the MacBook is back up and running now.

I’ve replaced all the hard drives in my MacBooks over the past few years.  I’ve replaced the original 64GB drive in the mid-2010 MacBook Air 11″ with the Aura 256GB drive without a problem; the original 120GB drive in the MacBook Pro was replaced with a 256GB SSD drive; and the time had come to replace the 128GB SSD in the 13″ MacBook Air.  The process is simple enough – unscrew the back cover, remove the old SSD, put in the new one, put back the cover, reboot to the OS X Recovery partition, reinstall and then migrate the data from the old SSD.

Well…that’s how it would have gone had I done this while I had OS X Sierra as the running OS on the system.  With High Sierra there’s a bit of a hitch.  I installed the new drive into the MacBook, replaced the cover and booted it.  Pressing Command-Shift-R allowed me to boot using the Internet Recovery so that I could install High Sierra (rather than Mavericks which is what would normally be used).  When the OS X Recovery finally loaded I went to Disk Utility to format the drive – except that it wasn’t showing up.   I found that odd and immediately thought that this was a defective drive.  Except, when I placed it into an external enclosure and connected it via USB to my MacBook Pro – it showed up just fine.  After a quick Google search I found several references to problems with the OWC Aura drives and High Sierra (see here – the others pretty much point to the same link).  After playing with it a little more I contacted Macsales.com and they recommended that I try the following:

  • Do a PRAM reset on the initial boot after installing the drive
  • Boot using Command-Option-R
  • Select Disk Utility
  • Select View All Devices
  • Quit Disk Utility
  • Open Disk Utility again (this step may need to be repeated three or four times)

Well, after trying the above the drive was still not showing up (or, if it did show up, it only appeared in Disk Utility and the macOS install would then fail).  After playing around with it some more and having no luck I decided it was time for an RMA and to ask for a replacement drive.  While discussing everything I had done and putting together the RMA, the Macsales tech support happened to mention the idea of resetting the SMC.  I realized that I hadn’t tried that.

Following the tech’s suggestion I reset the SMC and booted the MacBook.  Finally the drive showed up.  I went into Disk Utility to erase and format the drive and realized that it was going to be formatted as an HFS+ Journaled drive.  The Macsales tech suggested that I use the command-line diskutil tool rather than the Disk Utility GUI.  In fact OWC has a blog post on how to format a new internal SSD in High Sierra and they recommend using the diskutil tool.  Following the steps in the post I was able to format the drive as an APFS formatted drive, rebooted into OS X Recovery and installed macOS High Sierra successfully.  After rebooting again I migrated the data from my old drive to the new one…I figured I was done.  I was wrong.
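For reference, the command-line formatting boils down to something like the following – a sketch only, since the disk identifier and volume name here are assumptions; check the output of `diskutil list` on your own machine before erasing anything:

```
# Show all attached disks and their identifiers (the internal SSD is often
# disk0, but verify -- erasing the wrong disk is unrecoverable)
diskutil list

# Erase the new SSD and create an APFS-formatted volume named "Macintosh HD"
diskutil eraseDisk APFS "Macintosh HD" /dev/disk0
```

After that, rebooting into Recovery and pointing the installer at the new APFS volume is what worked for me.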

The next morning when I booted the machine from a full shutdown I noticed that it was taking unusually long to boot.  After several minutes I watched the machine boot into OS X Recovery.  I was, to say the least, surprised.  I figured something had gone south with the drive and that I would have to reinstall.  I went into Disk Utility and discovered that the new SSD once again did not show up.  I was even more surprised to see that.  Before reinstalling, though, I rebooted the machine once more – and discovered that the new drive was suddenly available to the system and it was booting off of it.  After some more experimentation I discovered that if I restarted the machine (i.e. a “warm boot”) then the drive worked fine.  If I shut the machine down fully, then on the next boot it couldn’t see the drive and would go into OS X Recovery.

I rebooted the machine once more and pressed the D key to enter diagnostics.  After a couple of minutes the diagnostics system came back with the error: “VDH002: There may be an issue with a storage device.”  Really?  You think?  So, I rebooted and the machine came back up fully.  After more trial and error I’ve discovered that if I clear the SMC before I boot the MacBook then it will immediately find the new SSD on the next boot.  However, if I shut the Mac down fully and try to restart it without resetting the SMC, it will not find the new SSD.  I’m still trying to figure out how that piece fits into this whole puzzle.

No responses yet

Jul 29 2014

Hard Drive Failure vs. UPS Failure

When is a hard drive failure NOT a hard drive failure?  When it’s a bad UPS battery that is dying.  For the past week and a half I have noticed that my VMware ESXi server, which hosts three systems for me (two Microsoft Windows Server 2008 R2 systems and an Ubuntu Linux server), was complaining about a corrupt datastore (specifically the boot disk).  While the VMware support site didn’t provide much information on the specific error that I was seeing, I felt that it pointed to a hard drive that had bad sectors on it and was on its last legs (mind you, this drive is NOT that old and certainly doesn’t get a lot of activity).  I thought, “oh great – this is going to be fun to fix!”  I had moved the VMs off the server and was about to order a new disk when I noticed that my APC SmartUPS 1400 was indicating that the battery in the UPS had gone bad (the old “when it rains it pours” adage came to mind immediately).  I figured the battery was not an issue – I’ll just replace it…it’s under warranty (a 1-year warranty, and I bought the battery in September of 2013).  I called up AtBatt.com, spoke with the customer service representative, told them the problem and they authorized the return.  Given that my VMs were crashing (which I thought was due to the ESXi server having a kernel oops and then restarting) I set up a DHCP server on my Cisco PIX 501E firewall, enabled it, got the VMs restarted and then disabled the PIX’s DHCP process (but did not do a “write mem” on the PIX – so in the saved config the PIX DHCP server was set to enabled).
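The temporary PIX DHCP setup amounted to just a few lines of configuration – the address pool and lease time below are illustrative placeholders, not my actual addressing:

```
! Enable a small DHCP pool on the inside interface (addresses are made up)
dhcpd address 192.168.1.100-192.168.1.150 inside
dhcpd lease 3600
dhcpd enable inside
! ...and later, disable it in the running config only -- without a
! "write mem" a PIX reboot restores whatever state was last saved
no dhcpd enable inside
```

That last point is exactly what bit me, as the next paragraph shows.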

Yesterday, I suddenly notice that I’m getting an IP address from the range configured in the PIX DHCPd server.  I go in and poke around and discover that the PIX had rebooted at 6:11AM yesterday morning.  On top of that my Cisco AP1200 wireless had also rebooted at 6:11AM, and so did my ESXi server (and the event logs were complaining about a corrupt datastore).  Suddenly it occurred to me that the problem was not in the ESXi server (or the PIX or any other network gear) but rather in the UPS.  The UPS was doing a self-test at 6:11AM, the battery failed and the UPS rebooted itself (thereby interrupting power to my entire network stack).  I quickly replaced the UPS with my other SmartUPS 1400 which is still good and everything has been humming along well since (no problems noticed).

This morning I open up the SmartUPS with the bad battery and to my shock I find that the battery is deformed in shape as can be seen from the pictures below.

[Photos 1–4: the swollen, visibly deformed battery removed from the SmartUPS]

In essence the battery failed horribly and I am quite lucky that it didn’t explode or start a fire!  It took me 15 minutes and the removal of the UPS cover and pulling the case apart a little bit just to get the battery out.  The battery is an Amstron battery and is manufactured in China.  Suffice it to say I am shipping it back today.  Now, I’m supposed to receive a replacement battery from AtBatt but I will also order one from APC.  I am not willing to risk a fire or a battery explosion to save $80.  It’s just not worth it.

No responses yet

Aug 16 2013

Building an ARM-based VM for SDL Training

Before I get into the details of this post I want to provide a bit of background to this project.  I run the Security Engineering Team for a manufacturer.  We build hardware and software products and my responsibilities include integrating a Security Development Lifecycle (SDL) into our product development process.  I’ve been working on this for about two years, slowly integrating a Microsoft-style SDL into each product’s development lifecycle.  Part of this effort involves developer and architect training.  Initially we chose to use a third party to provide this training to our developers and kicked off that effort in April of last year.  Unfortunately, the initial training did not go well.  We had significant problems which forced us to re-evaluate our approach to the training.  After the problems we had with the initial round of classroom-based training (CBT) we decided to move forward with our own internally developed training.  I mentioned in a previous blog post that I’ve been focused on putting together training on topics like:

  • Threat Modeling
  • Secure Coding in C and C++
  • Secure Coding in C#/.NET

I went about putting together the relevant material for these classes and building out both the class presentations and the hands-on lab materials.  The curve ball that resulted in my putting together this VM was thrown my way when I gave the class at one of our facilities in France.  I always ask for feedback from the attendees and I got very positive feedback from this class.

About a week after the class was over I got the “unofficial” feedback through one of the guys who works in the Security Engineering Team in Europe.  The gist of this feedback was that while they really liked the labs, they felt it would be more realistic if the labs were done on an ARM-based VM rather than the Intel x86-based VMs that I was using.  All of our hardware products utilize an ARM-based processor for a wide variety of reasons – not least of which is that they’re embedded devices.

On top of that request, the developers were more familiar with Windows (they do all of their development either in Eclipse or, in the case of the guys who develop our .NET-based applications, Visual Studio) and they wanted more of a windowing environment like Windows (I don’t know of any version of Windows that runs on the ARM processor).  So, between these two requests I had to start looking at building my own ARM-based VM – with the caveat that it has to run under Windows…yeah…fun!

First thing I had to do was find a processor emulator.  That part was pretty easy – QEMU!  QEMU is a fantastic VM tool – while everyone talks about VMware or VirtualBox or Xen, people tend to overlook QEMU.  QEMU not only lets you build your own x86-based VM, it also allows you to build VMs that use other processor architectures like ARM, PowerPC, Alpha, SPARC, S390, Motorola 68K, and others.  I mean, this is a really cool tool.

I downloaded the latest version of QEMU (at least it was the latest when I downloaded it): 1.5.1 and installed it on my Windows 7 laptop and then did a search for pointers on how to build an ARM based VM using QEMU.  And boy did I find the links – unfortunately they were all expecting that you would build the VM under a Linux system – not a Windows system.  Some of the links I found that were really helpful were:

Between these sites and some tinkering I was finally able to get the ARM-based Debian image built.  However, I had such difficulties building it under Windows 7 that I finally punted and built it on a spare machine I had in the basement (an HP DL380 2U server with 12GB of memory and 216GB of hard drive space – you’d think that was overkill, but believe it or not the VM took almost half a day to finish building – and then it was a matter of getting the development packages installed!).  I’ll post about the whole effort (and the effort of building the VM under Windows) in the next few weeks.
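For the curious, booting the finished image looks roughly like this – the kernel, initrd and disk image file names below are placeholders for whatever your own Debian ARM build produced:

```
qemu-system-arm -M versatilepb -m 256 \
  -kernel vmlinuz-3.2.0-4-versatile \
  -initrd initrd.img-3.2.0-4-versatile \
  -hda debian-armel.qcow2 \
  -append "root=/dev/sda1"
```

The versatilepb machine is one of the ARM boards QEMU emulates and the one most of the Debian-on-QEMU guides assume.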

One response so far

Aug 08 2013

Security Development Lifecycle Training

Last year we tried to outsource our classroom-based training (CBT) for our SDL effort to a third party…that didn’t go so well.  I don’t want to mention the name of the company we used, but we were disappointed enough with the first round of the training that we decided to go our own route.  To that end I was tasked with creating the content for the training…and I have been heads down most of this year working on several classes:

  • Threat Modeling
  • Secure Coding in C and C++
  • Secure Coding in C#

The Threat Modeling class has been completed (although it could stand to be updated and cleaned up a bit).  The Secure Coding in C and C++ class was completed, but the feedback I got from my second group of attendees was that they do their development on the ARM processor platform and wanted to see the exploits in the hands-on lab exercises on that platform.  The Secure Coding in C# class is still being built out.

The good news is that I have been able to get a Debian Linux image built (with a GUI interface) for the Secure Coding in C and C++ class using the QEMU ARM emulator.  The next step is to set up the networking so that I can pull additional packages into the image and build out a complete development environment.  This has been driving me crazy for the past couple of months because the installer for the image and the QEMU disk image were constantly giving me problems.  Today was a “Good” day…
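The networking approach I plan to try is QEMU’s user-mode NAT, which in the QEMU 1.5-era syntax is just a couple of extra flags on the boot command – the file names and forwarded port here are illustrative:

```
qemu-system-arm -M versatilepb -m 256 \
  -kernel vmlinuz-3.2.0-4-versatile \
  -initrd initrd.img-3.2.0-4-versatile \
  -hda debian-armel.qcow2 \
  -append "root=/dev/sda1" \
  -net nic -net user \
  -redir tcp:2222::22
```

User-mode networking gives the guest NAT’d outbound access (enough for apt-get), and the -redir line forwards host port 2222 to the guest’s SSH port so you can shell in from the host.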


No responses yet

May 26 2011

Software Development Lifecycle – Training

Lately I’ve been working on incorporating a Security Development Lifecycle (SDL) into the development processes of a smart grid vendor for their smart grid products.  It’s no secret that everyone (from the vendors to the utilities to the public utility commissions to NERC and FERC) is concerned about the rush to deploy smart grid – or, more aptly, advanced metering infrastructure (AMI) – systems.  There are many issues that need to be considered when deploying an advanced metering infrastructure: internal security at the utility, securing the endpoint devices, the security of the connecting network.  All of these are things to rightly be concerned about.  However, very few smart grid vendors have focused on the built-in security of their software.  I’m not talking about all of the bells and whistles that they provide to secure the AMI infrastructure…I’m talking about the quality of their code.  It’s all well and good to have lots of security features that your customers can turn on and off…but what lurks under the hood?  Buffer overflows?  Heap overflows?  Cross-site scripting?  Cross-site request forgery?  I could go on and on.  To deal with these concerns and potential vulnerabilities I’ve been working on implementing Microsoft’s Security Development Lifecycle (SDL) in our product development groups.  This has been a real challenge given that we previously didn’t worry about such issues, since meters (electric, gas, and water) were isolated, electro-mechanical devices that didn’t have two-way (or in some cases even one-way) communication capabilities.  I plan to post updates on implementing an SDL in this blog in hopes that others learn from our experience.
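To make the code-quality point concrete, here’s a minimal illustration of the first item on that list – a classic buffer overflow next to a bounds-checked alternative.  This is example code of my own for training purposes, not taken from any vendor’s product:

```c
#include <stddef.h>
#include <string.h>

/* Vulnerable: strcpy() happily writes past the end of `buf` when `src`
 * is longer than the buffer -- a classic buffer overflow. */
void unsafe_copy(char buf[16], const char *src) {
    strcpy(buf, src);            /* no bounds check at all */
}

/* Safer: check the input length against the destination size and fail
 * cleanly instead of corrupting memory. */
int safe_copy(char *buf, size_t buflen, const char *src) {
    size_t len = strlen(src);
    if (len >= buflen)
        return -1;               /* input too long -- caller must handle it */
    memcpy(buf, src, len + 1);   /* copy including the NUL terminator */
    return 0;
}
```

The fix is not exotic – it’s exactly this kind of discipline, applied everywhere, that an SDL’s secure-coding training is meant to instill.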

One of the primary components of an SDL is a software security training program.  Developers and development management tend to focus on one thing primarily – writing code and getting it working as fast as possible.  In many cases security is not even an afterthought and even if it is given some consideration many developers don’t have the experience in writing code with security in mind.  This is where a software security training program is essential.  It needs to cover a wide variety of topics such as an overview of the SDL process, secure coding in C/C++/Java/.NET, threat modeling, and secure software architecture to name a few.  In today’s market there are two options in software security training for an organization that is looking to stand up an SDL:

  1. Do it yourself
  2. Outsource

From a “do it yourself” perspective one of the hardest parts is finding people skilled at secure coding within an organization that is already possibly behind the curve on software security.  All content would be developed internally – and there’s the Catch-22: how can you develop the content when your staff doesn’t have the skills necessary to write the content which needs to be taught?  In addition, you will need to set up a learning management system (LMS) in order to track developers as they go through the internally developed (or perhaps bought) training.

In many cases the only viable alternative is to outsource.  Outsourcing should leverage both instructor-led training (ILT) and online classes.  The only thing to decide is which philosophy you subscribe to with regard to training: ILT first with online classes as reinforcement/refresher, or online classes first followed by ILT.  I’ll try and explain both approaches below:

Leveraging ILT before proceeding to online training is based on the idea that the most in-depth training should come upfront, with the online classes there to reinforce the material learned in the ILT classes.  In addition, the online classes can be used as refreshers after some specified period of time – say, approximately a year – after the initial ILT/online classes have been taken.  The trick is that the online class content needs to be updated during that time…otherwise it becomes stale and loses value for the developers.  The big benefit here is that you put a lot of effort in upfront to get your developers trained and can leverage that training as soon as possible.

Flipping the sequence around has the online training occurring before the ILT classes.  The philosophy here is that the developers get a broad knowledge of the SDL and its various components and then you’re able to focus the ILT more effectively to provide the attendees a class that explores the content more completely.  One of the big benefits to this approach is that the developers get a broad education in what an SDL is and what steps are part of the overall process.  This allows you to provide some training to all of your developers (of course that depends on how many seats you buy for the e-learning system) and to take those who are key and provide them the ILT first.

It’s hard to say which is the better approach – there are too many factors to consider, cost and schedule being the primary ones.  It is my belief that both approaches are equally valid.  I would also stress that it depends on how big your developer population is and how quickly you need to get some training started.  From my own perspective I think the idea of starting with e-learning and then moving to ILT is more effective – it allows all of your developers to start at the same knowledge level before going through the ILT.  It also doesn’t prevent you from using the e-learning later as a refresher for the material they learned in the ILT.  I’d be interested in hearing others’ thoughts on this as well.

2 responses so far

Mar 30 2011

What is NERC CIP Compliance?

One of the biggest challenges I find in talking to customers about Smart Grid/AMI deployments is answering the question “Is your solution/architecture NERC CIP compliant?”  It’s somewhat frustrating since there’s no certification (which is typically the direction the question comes from) that I can point to – you know, something like UL or Windows Hardware Quality Lab (WHQL) testing – that says the solution or architecture is NERC CIP compliant.  In many cases I have to spend a fair amount of time resetting customer expectations about what NERC CIP (and in many cases the NIST IR 7628) really requires.

The NERC CIP is a subset of the NERC Reliability Standards and comprises nine standards specifying security requirements that utilities must meet.  The standards include the following:

Version 3 of the NERC CIP standards is currently in effect.  The above links actually point to the version 4 documents of the standard, as they have passed the ballot vote, although a mandatory effective date has not yet been set.

Each standard covers a different domain but the common feature among them is that the standards focus more on process and policy than on actual technology.  Does that mean that they don’t touch the technology?  No…by no means.  In fact some of the requirements in individual CIP standards (specifically CIP-002, CIP-003, CIP-005, CIP-007 and CIP-009) can be met based on the technical capabilities of the Smart Grid/AMI system being deployed.  That being said, however, the standards do not mandate specific technologies.  For example, the PCI-DSS standard requires that organizations use firewalls and anti-virus as part of their technical controls.  That’s a clear case where the standard specifically states what technical controls an organization must use in order to be compliant.  NERC CIP does not do that directly.

NERC CIP is written more broadly than standards like PCI-DSS.  In many cases the wording leaves it up to the utility or the industry to decide how the standard is applied and interpreted.  But as far as technical controls mapping directly – while there are ways to claim that an AMI or Smart Grid system meets specific requirements within the NERC CIP standards based on those technical controls, there is no way to simply say “yes, it is NERC CIP compliant.”

2 responses so far

Mar 24 2011

Smart Grid Security Vulnerabilities?

Published under Security, Smart Grid

I’ve been working for Itron for the past 14 months of which the last 5 have been as the Security Engineering Team lead for the company.  I need to keep abreast of current security trends in terms of the Smart Grid industry (I’m not going to go into the discussion of Smart Grid vs. AMI at the moment) and every so often I come across some rather glaring mistakes in information that, if not corrected, can lead to significant, unnecessary concerns about the security of Smart Grid or AMI deployments.  Normally I’m not that picky about correcting such mistakes but this one, in my opinion, needed some response as opponents of Smart Grids could use this as part of their arguments against Smart Grid technology and deployments.

Case in point is Guido Bartels’ “Combating Smart Grid Vulnerabilities” article in the March 2011 issue of the Journal of Energy Security.  On the whole, this article is spot on.  I think Mr. Bartels does an excellent job of laying out the case for the efforts being made to secure Smart Grid deployments by utilities and by vendors as well.  I only have one small issue with the article and that is the incorrect use of a graph titled “Number of New Smart Grid Vulnerabilities”.  This graph, developed by IBM’s X-Force, can also be found here.  It is actually a histogram of the number of new vulnerabilities identified by IBM’s X-Force Research and Development team over the period from 2000 to the first half of 2010.  Unfortunately it is incorrectly labeled in the article and I hope that the editors will do their readers a kind service by correcting the faulty title of the graph.

No responses yet

Oct 06 2010

Hello Again world!

Published under Thoughts

Well, after a few weeks of being down I’ve finally gotten my blog back up (although the website isn’t back up yet).  I’ve moved the site over to BlueHost since I got tired of PEPCO power outages taking my network down.  All in all I have to say that I do like BlueHost – it just takes a bit of getting used to, and letting some control go.

I still have a few things to fix (restoring screenshots to certain posts, restoring plugins to WordPress, etc) but I’ll get those ironed out over the next couple of weeks.

No responses yet

Next »