Sunday, January 31, 2016

Free cyber attribution services for the masses!

Press release, for immediate distribution
Today, Rendition Infosec announces the launch of a market leading cyber threat attribution service.  If you need cyber attribution, but can't be bothered with actual work, Rendition Infosec has the answer for you.  Cyber skills NOT required!  Even the least technical managers will feel comfortable using Rendition's new automated cyber threat attribution website www.whohackedus.com.  That's right: if you can use a web browser, you too can do cyber attribution.


How does Who Hacked Us™ work?
By using the Who Hacked Us™ service, you'll be assured the latest in quality attribution data, vetted by our crack team of srand()-generating PHP scripts.  Our analysts ingest hundreds of terabytes of threat data every day and bulk-label it as irrelevant for random attribution purposes.  Instead of worrying about that "complicated" and "confusing" data, we instead rely on choosing random data from a series of pre-populated values.
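For the technically curious, here's a minimal sketch of what our "analysis engine" might look like (hypothetical, obviously - the actor names and confidence levels below are made up in the spirit of the joke):

import random

# Hypothetical "attribution engine" in the spirit of the spoof: ignore all the
# evidence and pick an answer from a list of pre-populated values.
ACTORS = ["China", "Russia", "Iran", "North Korea", "a disgruntled insider"]
CONFIDENCE = ["high", "very high", "unprecedented"]

def who_hacked_us(evidence=None):
    """Return an "attribution" without ever looking at the evidence."""
    del evidence  # irrelevant for random attribution purposes
    return f"{random.choice(ACTORS)} (confidence: {random.choice(CONFIDENCE)})"

print(who_hacked_us("hundreds of terabytes of threat data"))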

But can I trust random attribution data?
Of course you can.  Would we steer you wrong?  Wait, don't answer that.  Just blindly trust that our service uses only the best randomly generated attribution data, which is sadly almost as reliable as the real attribution data we see.  Doubt that we're correct?  Just remember that the now seemingly defunct Norse spoke of port scans from Iran as foreign nation state attacks.  Were they?  Not a freaking chance.  And Norse isn't the only vendor peddling shoddy attribution data.  So Rendition figured "hey, there must be some money there."

Should we invest in this service? Are there competitors?
ThreatButt already has the mother of all Cyber "Pew Pew" maps.  While we could have competed, Rendition instead decided to enter the market of automated cyber threat attribution.  Technically, Rendition created the market, leading market forecasts above all other vendors in the supernatural hemisphere (we'd call it something else, but we think Gartner laid claim to that "other" term and we don't like lawyers).

Device Compatibility
The Who Hacked Us™ automated cyber threat attribution service works on any device with a web browser, including Windows, OSX, iPhone, Android, BlackBerry, Linux, and some IV infusion pumps.

How do I invest?
Simple - you can follow us on Twitter (@whohackedus) or send us an email (whohackedus at renditioninfosec.com) and provide great suggestions for other data that you would find useful for the random cyber attribution service.  As always, sharing the service with your friends and colleagues will be most appreciated.



Blatant Disclaimer: We're not idiots; this whole thing is an obvious spoof.  It's (really) sad that we feel we need this disclaimer here, but there are lots of paste eating morons on the Internet.  This post (and the site) pokes a little fun at the threat intelligence industry as a whole, many of whom are absolute charlatans. If you can't take a joke, you probably need to find work in another field.  If on the other hand, you want to build a REAL threat intelligence program (which is way more than just an IP blacklist), contact us and we'll be happy to help you.  We've helped build several programs for very large (multi-billion dollar) organizations all the way down to small/medium enterprises.

Friday, January 29, 2016

Just because you can, doesn't mean it's legal

Just because you have the technical ability to do something, that doesn't make that thing legal.  I came across this text in a description for Paraben's iRecovery USB stick.  I don't think Paraben has anything to do with this horrendous description.  Software descriptions like these are bad for our industry as they make forensic professionals sound shady.


Even if (as the description claims) "sometimes you just have to know," knowing may not be strictly legal.  If you access your boyfriend's phone without authorization, you are probably breaking the law.  If you have to shoulder surf his PIN to unlock the phone, you almost certainly are.

What about accessing a backup file on a computer and parsing through that?  It probably depends on the computer.  Where is the computer located?  Is it in a common area?  Does everyone use the same login or are there individual logins for each user?  In general, computers with shared accounts in a shared location get less protection than those that are locked in the owner's bedroom, have no shared accounts, and are always protected by a password.  But I am not a lawyer and if you are not sure you should seek advice from one.

Be careful before you decide to image a machine that doesn't explicitly belong to you.  This happens regularly with BYOD when employers overstep and think that because a device is connected to the corporate network, they have permission to do whatever they want.  Depending on your employment agreements, this might be true. But again, consulting with your legal team is ALWAYS the right thing to do.  I can't tell you the number of times Rendition Infosec has been asked to image machines that don't strictly belong to the requesting organization.  Some clients are really annoyed when we ask questions about authorization, ownership, etc.  But remember: just because you can, doesn't mean it's legal.

Thursday, January 28, 2016

Centene Corp loses health information for 950,000 patients

Ask yourself: are your IT policies promoting unsafe use of external drives?  What's the potential cost?

The Centene Corporation announced that it has lost health information for about 950,000 patients, though with a nice round number that high I get the feeling they don't really know.


Recently, it's been reported that your health data is worth more to criminals than stolen credit cards.  Many times criminals will use this data to perpetrate Medicare or insurance fraud to cash out.  In any case, once your health data is out there, you can't get it reissued like you can a credit card.

Although Centene doesn't elaborate on the type of hard drives lost, I'd like to suggest that these were probably external drives.  At Rendition Infosec, we certainly see lots of external (mostly USB) drives attached to computers at client sites.  And quite frankly it makes us cringe.  But why would anyone want to store sensitive data on an external drive?  After all, they are usually very slow and have higher failure rates than internal drives.  And let's be fair, they are 1000% more likely to be lost/stolen than internal drives.

In interviewing clients who rely on external drives, we usually find that they are using them for convenience.  In many cases, getting corporate approved storage on a NAS or a SAN is very expensive for the department.  And when IT provisions corporate machines, they usually have very small hard drives.  The latter makes sense: we want users to save files on the corporate storage where it is centrally backed up, and large workstation hard drives work against that.

But at one recent client, we found that the cheapest a department can purchase "corporate approved storage" for is $2,000 per TB.  Let that sink in for a second.  At $2k/TB, you can totally expect people to look elsewhere.  A quick check on Amazon shows that I can buy a 5TB USB hard drive for less than $130.  Compared to what this client's IT department charges for storage (albeit faster, more accessible, and more secure), that's a savings of $9,870.  You don't need a PhD in psychology to know what most people will do when faced with such a price disparity. Of course, that external USB drive is more likely to be lost than most NAS devices and since it was deployed by an end user, it is probably not encrypted.
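For those keeping score at home, the math from the paragraph above works out like this (a trivial sketch using the prices cited):

# Back-of-the-envelope comparison using the figures cited above.
corporate_cost_per_tb = 2000   # USD per TB of "corporate approved storage"
usb_drive_cost = 130           # USD for a 5 TB external USB drive
usb_drive_capacity_tb = 5

corporate_cost = corporate_cost_per_tb * usb_drive_capacity_tb   # $10,000
savings = corporate_cost - usb_drive_cost                        # $9,870
print(f"Corporate: ${corporate_cost:,}  USB: ${usb_drive_cost:,}  Savings: ${savings:,}")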

The Centene breach should highlight the need for sensible IT policies. If your IT policies are driving your users to unsafe behaviors, work with the organization to change those policies to drive users to sensible, secure behaviors.

Wednesday, January 27, 2016

Veteran's Administration pays "special attention" to whistleblowers

Government organizations have to deal with whistleblowers - it's a fact of life.  It's also against the law to retaliate against whistleblowers under the Whistleblower Protection Act.  But the act only prevents retaliation or threats of retaliation.  Is additional surveillance allowed?  Maybe - don't take legal advice from me, I am not a lawyer.


But the VA lawyers may soon have to answer this question.  The VA was caught "diverting" mail sent from certain employees - those identified as whistleblowers.  Members of Congress now want answers about this.  The VA responded that the mail was not actually being "diverted" but rather was being given special priority so that known whistleblowers could have their mail to senior executives answered as quickly as possible.  Um, right. I was born at night but it wasn't last night.

Regardless of your stance on the political issues surrounding the VA fiasco, there are some lessons to be learned here.  Left to their own devices, systems administrators often take unilateral actions they believe will help the organization.  These often take the form of surveillance.  Unfortunately, not all of these actions are in keeping with the law and/or corporate policy.

At Rendition Infosec, we recommend that organizations get help from their internal legal counsel to review their information security policies.  A quick chat with lawyers today can save you real headaches later if there is confusion about what is and isn't legal.  I strongly suspect that the VA didn't take this step.  Once policies are approved by legal, it's a good time to communicate the policies back to rank and file workers.  When mistakes happen (and they will), don't cover them up - use them as case studies to educate the workforce.

On a personal note, as a service connected disabled veteran, I applaud VA whistleblowers.  The VA has gone far too long with actions nearly unchecked by the legislature.  It's high time for a change.  Even if extra special monitoring for whistleblowers is strictly legal, it stinks to high heavens and I expect heads to roll.

Monday, January 25, 2016

The IRS, taxpayer data, and evidence spoliation

According to an article from Americans for Tax Reform, the IRS has admitted that it wiped a hard drive containing key data for a FOIA request, despite a judge's order to preserve it.  Surprisingly, the FOIA request came from Microsoft and was in response to what appeared to be shenanigans in hiring outside counsel for a tax auditing case at a cost of $2.2 million to the taxpayers.  The firm was charging $1000/hr despite having no actual experience in the subject matter.  The FOIA request was investigating corruption and the loss of the digital evidence may now make it impossible to determine the circumstances under which this firm was hired.


The IRS has failed to preserve digital evidence in other cases as well.  Previously it destroyed evidence on more than 400 backup tapes containing potentially incriminating email.

If you work in private business, you should definitely take heed.  Unlike the IRS, if you fail to preserve evidence after being ordered to do so, there will very likely be substantial consequences.  A spoliation ruling from a judge is devastating to your case in any lawsuit.

At Rendition Infosec, we always advise clients to develop policies surrounding litigation hold.  In any large company, it's only a matter of time before you are served with a preservation order/litigation hold.  Getting the response wrong can cost you big time, so in addition to creating a policy, organizations should also test the policy in tabletop exercises.  Like so many things in infosec, this is a place where an ounce of prevention can be worth a pound of cure.

Saturday, January 23, 2016

Adobe files DMCA takedown notices to users sharing MacWorld article

Adobe filed DMCA takedown notices with Facebook against anyone who shared an article from MacWorld noting that Creative Cloud had been hacked and pirated.  The article (linked here) does not in any way encourage users to pirate Creative Cloud or provide them instructions for doing so.  It simply reports on the fact that pirated versions are available, highlighting the futility of considering software more secure simply because it runs in the cloud.

The DMCA takedowns were issued to Facebook more than two years after the article was shared.  Even if there were any real damage here (there isn't), it was done long ago.


You can almost feel the anguish from users like Susie Ochs who now have to jump through hoops to get their Facebook accounts reinstated.  While some just use their social media accounts for catching up with friends, editors like Mrs. Ochs use them professionally.  Adobe is a large enough company to know better than to file this DMCA notice.  It is also pretty clear that rather than this simply being an inconvenience, lack of access to her Facebook account will have an actual financial impact for Mrs. Ochs.


Facebook went so far as to remove the share button entirely from the article.  But that hasn't stopped people from tweeting and retweeting it.


The security community has largely been at odds with the DMCA since it was originally passed into law.  The DMCA is pretty horrible legislation and simply allows the alleged copyright holder to issue takedown notices for anything they believe to be infringing.  The problem is that there is usually little verification performed to ensure that the material listed in the takedown notice is actually infringing.

Facebook clearly didn't do any verification for the MacWorld article after receiving a notice from Adobe.  Now, users who shared the allegedly DMCA infringing material have had their accounts locked and must jump through hoops to get them reinstated.

I used to work with someone who got a DMCA takedown notice for sharing open source graphics libraries on his website.  Yep, you guessed it. He was 'infringing' on the X Files by sharing an archive containing X11 files.  The problem of course is that the burden of proof is on him and not on the party claiming infringement.  To compound matters, the reporting party enjoys near-complete impunity for mistakes.  As written today, the DMCA threatens every security researcher and tech reporter.  It's time for a change.

I always like to provide some actionable recommendations in my posts.  Here are a few:

  1. Educate those you come in contact with on how DMCA works. Most legislators who could change the law simply don't understand how horrible it is.
  2. If you are hit with a bogus DMCA notice, file a counter notice. Don't just take it. Someone has to stand up to the schoolyard bully.
  3. If your organization is considering issuing DMCA takedown notices, try to draft policies surrounding the verification of the infringing material. Adobe is getting nothing but bad PR from this - you can learn from Adobe's mistakes.


Friday, January 22, 2016

Is RSA really still relevant?

With so many information security conferences springing up, those new to the industry often have a hard time knowing which ones are worth attending.  For years, two security conferences stood out as forerunners, at least in the commercial space: RSA and Blackhat.  While Blackhat catered to the more technical crowd who likes to get into the bits and bytes, RSA catered to a slightly less technical group.  I've attended and spoken at both Blackhat and RSA and they have both served their purposes as fantastic (if very commercial) security conferences.

Has RSA gone off the rails in 2016?
RSA made some questionable decisions in their keynote selections for 2016.  They are apparently creating a CSI: Cyber panel with actors from the show.  The series creator and executive producer will also be speaking there.  I saw Colbert a few years ago at RSA and while I initially questioned his selection for a keynote, he was fantastic.  He prepared well and was spot on with some insightful remarks about the industry.  We should expect nothing less from someone who testified in front of Congress and started his own Super PAC.  He may have started as a comedian, but by the time he spoke at RSA, he had transcended that. But of course, because he's Colbert he was funny.  Damn funny.

But CSI: Cyber?  Seriously?!  What do two actors and a producer from CSI: Cyber really have to offer attendees?  I frequently point out to my infosec brethren that when shows like CSI: Cyber paint an unrealistic picture of our craft, they do us significant harm.  When Abby and McGee share the same keyboard on NCIS to stop hackers, we look stupid.  And technotards (like my mom) think this is what I really do for a living.  That couldn't be much further from reality.

Two idiots, one keyboard

Of course when Scorpion rebooted the air traffic control system with a sports car and a jumbo jet, the public were fortunate to learn how computers and hacking really work.  Sorry. I threw up in my mouth a little there.  Okay, a lot.

 
Are you kidding me?

But just when you think it can't get dumber, CSI: Cyber comes along.  Note that in this scene, one of the keynote speakers identifies malware by separating green and red code.  I have no idea how this is supposed to work, but people believe you can do this.  Just ask any consultant who has been in the field.  Unrealistic expectations about our abilities abound.

Because all malware code is red?

Want to see two episodes' worth of technical jargon compiled into 3 minutes?  This video shows just how often the words deep web, zero day, code, and hacker appear.  Bottom line, it's out of control.

More techno jargon please...

This video introduces the sure-to-be-a-hit term "son of a backdoor hack."  We also learn that if we can pinpoint the backdoor frequency, we can track it.

Well "son of a backdoor hack" this show is insulting

Overall, CSI: Cyber is a disgrace to our profession and does nothing to elevate it.  RSA should not be giving the CSI: Cyber actors a stage to interact with the REAL heroes of infosec who actually keep us safe day in and day out.

This year, I don't have to worry about whether I should attend RSA.  I'm teaching real infosec crusaders advanced exploit development in London and have a schedule conflict.  But with cruft like this, I'm not sure I'd go even if I could.

Look, Hugh Laurie (Dr. House) never gave a keynote at a major medical conference.  Wesley Snipes never addressed West Point, despite being a general bad ass in his movies.  This idea that we should have actors addressing RSA is ridiculous.  It would be one thing if CSI: Cyber were elevating our craft, but they surely are not.  If anything, they are hurting it.

Twitter passwords? Seriously?!
While polishing this post this morning I learned that RSA was apparently asking attendees for their Twitter passwords so it could tweet on their behalf.  This is insultingly stupid for a security conference.  Everybody knows you should never give your password to anyone; that's what OAuth is for.  But despite that, many did give away their password.  I take this to be another indication that RSA is pushing itself into obsolescence.
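For those who haven't dealt with it, the whole point of OAuth is that a third party can be authorized to tweet on your behalf without ever seeing your password.  Here's a minimal sketch of the idea (the helper functions are stand-ins, not the real Twitter API):

# A toy illustration of why OAuth beats collecting passwords: the app only
# ever holds a limited, revocable token.  Helper names are hypothetical.

def oauth_authorize(app_id: str) -> str:
    """Stand-in for the real OAuth dance: the user approves the app on the
    provider's own site and the provider hands back a scoped, revocable token."""
    return f"token-for-{app_id}-scope-tweet-only"

def tweet_on_behalf_of(token: str, message: str) -> None:
    print(f"[{token}] tweeting: {message}")

# The conference app never sees the password; if it misbehaves, the user
# revokes this one token instead of changing the password everywhere.
token = oauth_authorize("rsa-conference-app")
tweet_on_behalf_of(token, "Having a great time at #RSAC")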

At this point, I'd like to introduce a new theory.  Since RSA developers are very likely to understand the need to use OAuth rather than plaintext passwords, I submit that they didn't code the faulty registration form at all.  I submit that it may have been the CSI: Cyber actors - because "son of a backdoor hack," requesting plaintext passwords for external accounts just about lines up with their infosec prowess.

Thursday, January 21, 2016

$54.5 million stolen in probable phishing scheme

Today we learned that $54.5 million was stolen from the aerospace manufacturing company FACC. The company manufactures parts for Boeing, so it would normally be considered to be at risk for IP theft rather than financial crime, but the latter appears to have happened here.

The announcement from FACC reads:
On January 19, 2016 FACC AG announced that it became a victim of fraudulent
activities involving communication- and information technologies. To the current
state of the forensic and criminal investigations, the financial accounting
department of FACC Operations GmbH was the target of cyber fraud. FACC's IT
infrastructure, data security, IP rights as well as the operational business of
the group are not affected by the criminal activities. The damage is an outflow
of approx. EUR 50 mio of liquid funds. The management board has taken immediate
structural measures and is evaluating damages and insurance claims.
Earlier, FACC noted that they had contacted authorities in the matter. 
Today, it became evident that FACC AG has become a victim of a crime act
using communication- and information technologies. The management board has
immediately involved the Austrian Criminal Investigation Department and engaged
a forensic investigation. The correct amount of damage is under review. The
damage can amount to roughly EUR 50 million. The cyberattack activities were
executed from outside of the company.
This announcement had a real world impact on FACC's stock price, although the stock is rebounding some this morning.

FACC Stock Graph

Although details on the exact mechanism of theft are light in this case, at Rendition Infosec we predict that the initial intrusion vector was probably phishing.  We frequently see fraudulent invoices sent to companies, and many of them are paid.  These attacks (usually for smaller dollar amounts than seen with FACC) are often only discovered later during an audit.
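One cheap detection that pays dividends against invoice fraud is flagging lookalike sender domains.  Below is a toy sketch using only the Python standard library; the trusted domain list and threshold are purely illustrative:

import difflib

# Hypothetical example: flag inbound senders whose domain is suspiciously
# close to (but not exactly) a domain we legitimately do business with.
TRUSTED_DOMAINS = {"facc.com", "boeing.com", "renditioninfosec.com"}

def lookalike_score(sender_domain: str) -> float:
    """Similarity to the closest trusted domain (1.0 means identical)."""
    return max(difflib.SequenceMatcher(None, sender_domain, d).ratio()
               for d in TRUSTED_DOMAINS)

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    # Exact matches are fine; near-misses like "faccc.com" deserve scrutiny.
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return lookalike_score(sender_domain) >= threshold

print(is_suspicious("faccc.com"))    # True  - one character off
print(is_suspicious("example.org"))  # False - not close to anything trusted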

With FACC's misfortune in the news, today would be a great time to reinforce phishing awareness with your employees.  Don't think it can't happen to you too.  If you don't have a phishing or general infosec awareness program already in place, contact me and I'll be happy to help you set one up using proven techniques we've used across many of our clients. 

Wednesday, January 20, 2016

In IR reporting, precise language matters

Earlier I wrote about the Affinity Gaming v. Trustwave lawsuit over improper Incident Response (IR).  In this second installment of the lessons learned from the lawsuit filing, I’ll focus on using precise language in IR.  Precision in your choice of language matters. It matters a lot.  When you are sure about something, you need to say you are sure.  When you are not sure, say so too.


Choice of language is so important in infosec reporting that I spend a good deal of time talking about it in the SANS FOR578 Threat Intelligence course.  We focus on the proper use of estimative language to differentiate between what is known and what is believed.  If you're not sure, how confident are you in a particular assessment?  We then use that knowledge to critique vendor threat intelligence reports for the good, the bad, and the downright ugly.

Putting it to work
I’m going to put some of that knowledge to use here to evaluate some of the language in the complaint.  Let’s examine this excerpt:
Trustwave also stated that it “believe[d] that the attacker became aware of the security upgrades that were taking place and took several steps to remove both the malware and evidence of the attack itself. Almost all components of the malware were deactivated and/or removed from the systems on October 16, 2013. This activity ended the breach.”
There are a few things seriously wrong with this.  First, there’s no evidence backing the assertion that the attackers “became aware of security upgrades.”  I’ll let that go for a moment since this quote was probably taken from the executive summary and the evidence could be elsewhere in the report.  On a more pragmatic note, even if attackers had become aware of security upgrades, in more than a decade of IR work I’ve never seen attackers run away from a target.  They may adapt their tactics, but I've never seen them run away from a good payday.

A second obvious problem with this excerpt is that the report says “Almost all components of the malware were deactivated and/or removed” (emphasis added).   Without needing to know any of the details, this statement is inconsistent with the statement “This activity ended the breach” which follows immediately after.  What does "almost all" even mean in this context?

Horseshoes and hand grenades
Completing a successful remediation is not like horseshoes or hand grenades – close (e.g. “almost all”) doesn’t count.  Miss even one machine, and attackers will use that foothold to reinfect other machines in the network.

Close counts with these - not IR reporting.
It probably sounds now like I’m second guessing Trustwave’s actions here.  In my earlier post, I said I wouldn’t do that, and I won’t.  This is purely a critical review of the language that was used in the portions of the report that are available as part of the legal filing.  This is all about language, not about the IR actions.

Either framepkg.exe is malware or it isn't - you can't have it both ways...
Other language inconsistencies are highlighted by Affinity’s counsel in the following excerpt.
Trustwave’s report contains inconsistencies regarding the ongoing existence of malware on Affinity Gaming’s systems. As noted above, in its “Incident Dashboard” Trustwave asserted that the malware had been removed. However, elsewhere in the Report, Trustwave states that one particular piece of malware, Framepkg.exe, was still “running on the system at the time of acquisition” which occurred on November 1 and 2, 2013.
If Framepkg.exe is indeed malware (that seems likely) and Trustwave knew this (as claimed in the lawsuit filing), then the declaration that the malware was removed is clearly not accurate and is again a symptom of imprecise use of language.

Might the failed remediation be IT’s fault?
The Framepkg.exe inconsistency might be sloppy work.  But let me offer another possible explanation.  It is doubtful that the Trustwave consultants would have been responsible for actually cleaning the FramePkg.exe malware from the impacted server.  Confucius says “cleaning malware is a dangerous game, better to rebuild infected servers than try to clean.”

Once again, Confucius never really weighed in on IR best practices.
*In fact, nobody should have "cleaned" the malware.  Once a server is compromised, rebuilding the server is the only real option.  I've been pretty clear on my opinions about this; see my Shmoocon presentation with Mark Baggett titled Wipe the Drive.

Okay, so we know that one of Affinity's system administrators should have rebuilt the server as part of the remediation.  But did they?  I've worked several IR cases with Rendition Infosec where recommendations were passed to IT and then ignored or only partially completed.  In many cases, IT reported back to management that all recommendations had been implemented completely.  Based on my experience, I submit that it is at least possible that Trustwave notified Affinity IT of the need to rebuild the server and they simply failed to do so.  If this is the case, it would change things in the lawsuit substantially.

Trust but verify
This leads to another key recommendation for IR remediation - someone independent of IT should audit to ensure all recommendations were actually completed. We recommend this every time, but some clients have misplaced confidence in their own IT and choose instead to skip this step.  This is probably rooted in the desire to complete the IR as quickly as possible, but we all know that speed and quality are often at odds with one another.
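If you want a picture of what that independent verification looks like in practice, here's a toy sketch - the host names, file listings, and IOC list below are all made up for illustration:

# Hypothetical verification data: file listings collected from each in-scope
# host (via EDR, a remote collection script, etc.) after IT reported that
# remediation was complete.
collected_file_listings = {
    "pos-server-01": ["notepad.exe", "svchost.exe"],
    "pos-server-02": ["Framepkg.exe", "svchost.exe"],   # uh oh
    "dc-01": ["lsass.exe"],
}
IOCS = {"framepkg.exe"}   # indicators taken from the IR report

still_infected = {
    host for host, files in collected_file_listings.items()
    if IOCS & {f.lower() for f in files}
}
print("Remediation verified" if not still_infected
      else f"IOC still present on: {sorted(still_infected)}")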

What precisely is an "inert" backdoor?!
One final note about imprecise language comes from this excerpt explaining that a “backdoor component appears to exist within the code base, but appears to be inert.”  A backdoor either exists in the code or it doesn’t.  The use of the word “appears” here is not appropriate.  The statement that the backdoor “appears to be inert” is also very misleading.  Perhaps what they really meant to say is that there is no evidence that the backdoor was used. 

Closing thoughts
Using precise language is always important when writing reports in IR (or any infosec discipline for that matter). Choosing your words carefully can make all the difference when defending your investigation.

Tuesday, January 19, 2016

FDA issues draft guidance for updating med device software

This month, the FDA issued a bulletin titled Postmarket Management of Cybersecurity in Medical Devices.  The draft guidelines are 25 pages long and honestly don't contain anything earth shattering for those of us in the infosec profession.
"Severity to health" matrix
Of course the problem is that many medical device manufacturers don't really understand security.  How do I know?  At Rendition Infosec we regularly find crazy misconfigurations on medical devices.  Things like unauthenticated CGI, unauthenticated telnet, web servers running as root, hardcoded passwords, etc.  As basic as these recommendations will seem to infosec professionals, they are sorely needed in the medical device manufacturing market.

Some of my favorite parts of the recommendations include establishing a vulnerability intake process, monitoring for information about vulnerabilities discovered in their devices, and adopting a coordinated vulnerability disclosure process.  In working with several medical device manufacturers over the last few years, I have not seen a single one that follows any of these recommendations consistently.

If you don't want to read 25 pages of draft recommendations, here's the TL;DR version:
These programs should emphasize addressing vulnerabilities which may permit the unauthorized access, modification, misuse or denial of use, or the unauthorized use of information that is stored, accessed, or transferred from a medical device to an external recipient, and may impact patient safety. Manufacturers should respond in a timely fashion to address identified vulnerabilities. Critical components of such a program include: 
  • Monitoring cybersecurity information sources for identification and detection of cybersecurity vulnerabilities and risk;
  • Understanding, assessing and detecting presence and impact of a vulnerability; 
  • Establishing and communicating processes for vulnerability intake and handling;
  • Clearly defining essential clinical performance to develop mitigations that protect, respond and recover from the cybersecurity risk;
  • Adopting a coordinated vulnerability disclosure policy and practice; and
  • Deploying mitigations that address cybersecurity risk early and prior to exploitation  
While I generally despise the government legislating cybersecurity standards, this is probably a move in the right direction given the reckless handling observed to date from device manufacturers.  Also, don't forget that these are still in draft form.  If you don't like the recommendations, feel like something is missing, or think they have made a horrible error, submit your feedback to the FDA before they become binding recommendations.

Monday, January 18, 2016

Kiev Borispol airport in Ukraine attacked

The Kiev airport was the target of a cyber attack that was discovered over the weekend, according to sources in the Ukrainian government.  Reuters reported that the infected network included the airport's air traffic control system.

The best information available on the attack, while limited, comes from foreign articles.  This translated article states that Black Energy was detected in the network.
Due to the situation that developed with cyber attacks on power companies, the airport's IT specialists carried out an inspection of information systems and computers. The virus was quickly found and contained, which helped avoid possible damage to data or harm to the company. The State Service of Special Communication and Information Protection of Ukraine (CERT-UA) was notified of this fact.
Ukrainian government sources also made some bold speculation, indicating that the use of Black Energy malware "may indicate purposeful actions and sabotage on the part of the Russian Federation."

Another quote from Ukrainian government sources indicated that the command and control (C2) server was in Russia.

When working incident response and attack attribution at Rendition Infosec, we always caution clients that attribution is difficult and improper attribution is common.  Things are not always what they seem.  In this case, the C2 server is in Russia - but that certainly doesn't mean that Russian actors are responsible.  The server could be a legitimate business server in Russia that was hacked by a foreign group.  It could also be a VPS, paid for via Bitcoin.

Until more supporting evidence is released, it is difficult to conclusively attribute this to Russia.  We always recommend that attribution not be made based on IP addresses alone.  In fact, unless the Russian hackers want their activity attributed to them, the use of the Russian C2 server points away from Russia being involved.  Much better attribution data will likely be collected as part of a larger incident response operation.

One thing is for sure - 2016 is going to be a fun year in Ukrainian cyber security.


Friday, January 15, 2016

Trustwave being sued for faulty IR work - who will be next?

Blatant Disclaimer
Edit: Part 2 of this series is posted here.

Before I start, let me say that I had to carefully consider how to word this post.  I talked to some peers who advised that I not write this at all.  After all, who among us can say that we did everything perfectly in every incident response we've worked?  I certainly can't.  I'm not going to Monday morning quarterback the work of my industry peers and you shouldn't either.  But after careful consideration, I decided that there are lessons to be learned in examining the legal filing.  I'll break this into several posts over the next week since there's a lot to cover.


Case background
The case background starts something like this: Affinity Gaming was notified that their payment systems were likely breached.  Like most breaches, this one was detected through external means.  Affinity contacted their insurer, who told them to find a PCI forensics firm and supplied a short list from which they selected Trustwave.

Trustwave investigated and issued the results of their forensic investigation.  They also made several unspecified recommendations to Affinity for securing their systems and preventing future attacks.  Months later, Affinity discovered that attackers were still in the network.  Affinity hired Mandiant, who discovered that the original attack had never been remediated and that several systems investigated by Trustwave were infected with malware (which Trustwave failed to detect).  Mandiant also advised Affinity that the unspecified recommendations from Trustwave would not help secure their network.

Today's analysis
Today, I'm going to focus on scoping.  The first thing that hit me in the legal complaint is that the plaintiff, Affinity Gaming, asserts that they are not infosec experts.  In fact, Affinity asserts this is why they hired Trustwave to perform an incident response.  This is an excerpt from the filing supporting the claim of "Constructive/Equitable Fraud."
Trustwave knew that Affinity Gaming needed to rely and did rely on Trustwave’s claimed specialized knowledge, experience and qualifications, and on information supplied by Trustwave, in making decisions on engaging Trustwave to investigate, diagnose and remedy or contain Affinity Gaming’s data breach, and in believing that Trustwave had in fact diagnosed and remedied or contained such breach, because Affinity was not able to detect the falsity and incompleteness of the information supplied by Trustwave;
Elsewhere in the filing, Affinity notes:
Affinity Gaming trusted, and was dependent on, Trustwave’s assessment on what the proper scope of its engagement should be, given Trustwave’s data security expertise, and in no way limited or restricted Trustwave’s investigation of Affinity Gaming’s data systems. 
I have no inside information here, so we won't really know what happened until facts are presented at trial.  What I will note is that scoping any engagement is important.  During an incident, the client always wants to get back to normal operations in the shortest period of time for the lowest overall cost.  This lawsuit provides an example of the need to clearly communicate the scope required to resolve the incident.  Trustwave may have done this, but Affinity asserts they did not.

Many times, I notice that consultants are afraid to tell clients the hard truth.  At Rendition Infosec, I've worked several incidents where clients have a strong desire to say that they have remediated the incident completely when they haven't scanned all machines on the network for the indicators of compromise (or have done so ineffectively).  In some more extreme cases, clients have said that all machines on the network are clean without even having a device inventory or understanding how many machines they even have.  After all, even Confucius* says "you can't investigate compromises on machines you don't know you have."

* Actually, I made that up.  I don't think Confucius ever weighed in on incident response. 
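To put that point in concrete terms, here's a toy illustration of the gap that often exists between the asset inventory and the hosts actually swept for indicators of compromise (all host names are made up):

# Toy illustration: machines you know about vs. machines actually checked for
# indicators of compromise.  The gaps are where "we're clean" claims fall apart.
asset_inventory = {"ws-001", "ws-002", "ws-003", "srv-db-01", "srv-web-01"}
hosts_scanned_for_iocs = {"ws-001", "ws-002", "srv-web-01", "ws-099"}

never_scanned = asset_inventory - hosts_scanned_for_iocs
unknown_hosts = hosts_scanned_for_iocs - asset_inventory   # scanned but never inventoried

print(f"Hosts never scanned: {sorted(never_scanned)}")
print(f"Hosts missing from inventory entirely: {sorted(unknown_hosts)}")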

Give it to them straight
Over the years, I've lost some business by telling it to clients straight: "I know you wish the incident was over - but you are nowhere near done investigating. You can tell the board whatever you want, but you won't get a clean bill of health from me until we've completed a thorough investigation in accordance with industry norms."  Many consultants and employees on internal teams are afraid to do this and upset management.

When discussing this with an industry peer that I know and trust, she said "there's no right answer here.  Either way you risk losing the client."  I respectfully disagree. If you tell the client the hard truth that the scope is larger than they desire, you do indeed risk losing the client.  But this filing shows that if you don't tell the client the proper scope (and stick to it), you not only risk losing a client, but also being sued.

So what was the scope?
The short answer is that we don't currently know.  However, this excerpt from the filing provides some clues about the initial scope:
In its PFI Report, Trustwave defined the “initial scope of the engagement” as inspection of only 10 servers and systems and Affinity Gaming’s “physical security” and “network topology.” 
Depending on the background (which we are not privy to yet), this scope seems adequate. However, once Trustwave determined that servers were infected with malware, they should have at a minimum determined whether the same malware (or malware variants) was installed elsewhere in the network. This is apparently where things broke down, because according to Affinity the scope was never changed to reflect what the initial discoveries revealed.
Despite indications that Trustwave should have expanded its scope of engagement – such as Trustwave’s suspicion of a backdoor component, and identification of an open communication link that led outside of Affinity Gaming’s systems – Trustwave did not do so, nor did Trustwave recommend any such expansion to Affinity Gaming. 
If this goes to trial, we'll learn what recommendations were made by Trustwave and when they were made.  Whether you are a consultant or work on an internal team, you can learn from this.  When new knowledge is gained, the situation changes.  When the situation changes, you need to re-evaluate the scope of the IR.  And based on this lawsuit, I'd certainly advise making sure that there is a written record of those discussions about re-scoping the incident.  You never know who will be next.

Thursday, January 14, 2016

If you thought Hello Barbie was creepy, wait until you hear what NBC is doing...

There are some pretty unnerving Orwellian undertones in NBC's announcement of supposed Netflix viewing numbers.  Netflix doesn't post viewing numbers, so it's hard for traditional media corporations like NBC to understand how Netflix original programming is doing compared to traditional broadcast media.  If you thought Hello Barbie was creepy, just wait until you see what NBC cooked up to track Netflix viewing numbers.
At least Hello Barbie's necklace lights up so you know when she's listening
NBC reportedly solved this problem using software from Symphony Advanced Media.  The Symphony software passively monitors the environment through the microphones of smartphones that have certain applications installed.  The article doesn't make it clear what those applications are, how they are installed, or by whom.  The Symphony website isn't much help in determining this either.  But in order to capture this data, the applications would have to listen all the time.  Hello Barbie stoked earlier privacy fears, but at least she requires you to press her belt buckle to start listening.  She also lights up and there's an audio warning chime before she begins to listen.

In any case, NBC used the software to detect the viewing habits of Netflix viewers by listening for the theme music of Netflix original shows.  The reported numbers were in the millions of viewers, though this data was probably extrapolated from a much smaller sample set of viewers.  Symphony maintains a panel of a purported 15,000 people, which seems like an awfully small sample on which to base NBC's large viewer numbers.
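To see why the sample size matters, here is the sort of back-of-the-envelope extrapolation NBC would have to be doing.  None of the numbers below come from NBC or Symphony; they are hypothetical and only illustrate the math:

# Hypothetical extrapolation: how a 15,000-person panel turns into
# "millions of viewers."  All figures below are made up for illustration.
panel_size = 15_000
panelists_detected_watching = 1_200      # hypothetical detections
subscriber_base = 45_000_000             # hypothetical US subscriber count

detection_rate = panelists_detected_watching / panel_size
estimated_viewers = detection_rate * subscriber_base
print(f"Detection rate: {detection_rate:.1%} -> ~{estimated_viewers:,.0f} viewers")
# 8.0% of a small panel extrapolates to 3,600,000 viewers - along with all
# the sampling error a panel that small implies.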

I for one don't want a creepy smartphone application listening passively for theme music.  Because if it is listening for anything, it's listening for EVERYTHING. But forget my privacy for a second.  What about corporate infosec concerns?  Let's suppose for a moment that the Symphony software isn't being installed surreptitiously (and if it isn't yet, it will be).  If one of your employees installs the software on their phone to become part of some paid panel and then brings the device to work, what are the ramifications?  Hard to say without first understanding how the Symphony software works.

New technologies (and even new classes of technologies) are inevitable.  At Rendition Infosec, we advised clients that they needed policies surrounding Google Glass while it was still in early Beta.  Without policies concerning passive monitoring "always on" technologies, organizations place themselves at huge risk for inadvertent data exposure.  Without a policy governing the authorized use of the devices, nothing is against the rules and anything goes.  Picture the wild wild west.

Before we allow these technologies in the workplace, we should have a clear understanding of how data is retained and how data is transmitted (at a minimum).  Sooner or later, any passive monitoring device is certain to hear something you wish it hadn't.

Employee education is key in keeping these technologies out of our workplaces.  If your security awareness training doesn't cover this (BYOD is an infosec issue), it should.  Consider updating it as needed.  As always, use these sorts of events to start the discussion about infosec concerns in the workplace.  The days of sitting on the sidelines until retirement are long gone.

Wednesday, January 13, 2016

Using firewall backdoors to rekindle defense in depth discussions

If you haven't been under a rock for the last month or so, you know about the Juniper backdoor that was built into many of their firewall products.  At this stage, it is pretty clear that the backdoor was planted maliciously.  I've already written about the Juniper backdoor here and here and my official opinion is that despite a lot of speculation, at least the backdoor password wasn't the work of NSA.  Despite that, Juniper has dropped the Dual_EC_DRBG random number generator from its products (probably a wise idea).
Fortinet Fortigate Firewall
Last week, we learned that another firewall manufacturer, Fortinet, discovered a backdoor in their code and patched it.  Working exploit code was published to the Full Disclosure mailing list here.

According to Fortinet's blog, the backdoor password was not a backdoor at all, but rather a "management authentication issue."

Only older versions of the Fortigate firewalls
The Fortinet marketing team must be winning some kind of award for spinning a yarn like that.  It's akin to Caesar's Palace telling me that my room keys were intentionally demagnetized to prevent unauthorized access to my room*.  Fortinet says that the "management authentication issue" code was not maliciously placed.
*As you probably know, I travel a lot. I sleep in a hotel bed at least twice as often as my own. Caesar's Palace has hands down the worst key system of any hotel I stay in.

Whether you believe Fortinet about malicious placement of the code or whether you think it was NSA that hacked Juniper, the recently discovered firewall vulnerabilities should give us reason for pause.  As infosec professionals, we should be using these vulnerabilities to rekindle the discussion about defense in depth for our networks.

Most networks that we evaluate at Rendition Infosec resemble a piece of candy.  They have a hard crunchy outside and a soft gooey inside.  Once attackers breach the perimeter, they often move around the network with impunity.  Defense in depth is about more than running antivirus and having a perimeter firewall.  In fact, nobody has called that the standard for defense in depth in the last decade.

We need to evaluate what would happen if an attacker can bypass the firewall at will - or worse yet control it.  Because that's exactly what successful use of the backdoor passwords would do.  Either would allow the attacker a privileged place in the network, sitting on the very device that is supposed to protect the network.

Is your network one backdoor away from total compromise?
Can your network really survive this sort of attack?  Could you detect an attacker moving laterally from the firewall itself?  Is your network instrumented to even detect this sort of attack?  In most networks I've evaluated, the answer is unfortunately no.  However, it shouldn't be.  Especially in the wake of increasingly stiff regulatory fines for data breaches, organizations should be asking themselves how they can detect and prevent attacks that involve compromised edge devices such as firewalls, routers, and even VPN concentrators.  Being unable to detect these attacks won't save the organization from a lawsuit or regulatory fine after a data breach.
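Even something as simple as alerting on logins sourced from your perimeter devices would close part of this gap.  Here's a minimal sketch of the idea; the log format and IP addresses are hypothetical stand-ins for whatever your SIEM actually produces:

# Hypothetical detection sketch: flag authentication events whose source is a
# perimeter device.  Firewalls and VPN concentrators manage traffic; they
# shouldn't be logging in to workstations and servers.
PERIMETER_DEVICE_IPS = {"10.0.0.1", "10.0.0.2"}   # firewall / VPN concentrator

auth_events = [   # stand-in for parsed auth logs pulled from a SIEM
    {"src_ip": "10.0.5.23", "user": "jsmith", "dest": "srv-file-01"},
    {"src_ip": "10.0.0.1",  "user": "admin",  "dest": "srv-db-01"},
]

for event in auth_events:
    if event["src_ip"] in PERIMETER_DEVICE_IPS:
        print(f"ALERT: login to {event['dest']} as {event['user']} "
              f"originated from perimeter device {event['src_ip']}")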

My recommendation is that organizations begin conducting sand table exercises to ensure that they understand how they will respond to various incidents.  Sand table exercises help uncover systematic weaknesses in a network before they are exploited.  After all, if your defenses are broken on paper, you're not ready for a penetration test.   If you need help building and conducting sand table exercises, give me a shout.  I've built and executed many sand table exercises for small and large organizations alike.  We have several configured to discuss compromises of perimeter devices and get your organization thinking about its defense in depth strategy.

Tuesday, January 12, 2016

FTC on encryption software: meet standards or expect a fine

The FTC has levied a $250,000 fine on the manufacturer of dental support software that tracks protected health information (PHI), over the software's nonstandard encryption.

The FTC's complaint says that the software provider falsely claimed it provided a level of encryption that was consistent with standards and would keep data safe as required by HIPAA.  But the FTC alleges that the company didn't actually offer AES encryption, which has long been the "standard" recommended by NIST.  According to the FTC, the failure to offer the advertised "industry standard" level of encryption was deceptive.

There’s a lot to be gained by examining this from an infosec angle.  At Rendition Infosec we always advocate encryption for our customers, particularly when dealing with sensitive or regulated data.  The dental software specifically dealt with HIPAA regulated data and as such should have been protected.  And users of the software probably thought it was.  But as the FTC notes, the software did not use AES as recommended by NIST standards.

Organizations who adopt software where claims of encryption are made should ask what types of encryption are used.  "We use proprietary encryption" is a HUGE red flag.  "Our encryption algorithm is very difficult to explain" is another such red flag.  Even when vendors claim to use AES, audits should be performed to determine whether:
  1. The encryption actually is AES
  2. The encryption is implemented correctly
Last year, Rendition audited commercial software that protected PHI for a customer (unfortunately, I can't name the vendor or client due to NDA).  We found that the software did use AES.  However, the encryption key for the data was static across all installations of the software.  If attackers had access to one installation of the software (available as a demo), they had the keys to decrypt data encrypted by the software in any installation.
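To make the failure mode concrete, here's a minimal sketch of the anti-pattern versus the obvious fix.  It uses the Python cryptography library's Fernet construction (AES-based authenticated encryption) purely as a stand-in; we have no idea what primitives the actual vendor used:

from cryptography.fernet import Fernet

# ANTI-PATTERN (what we found): one key generated once by the vendor and
# shipped inside every copy of the software.  Anyone with the demo can decrypt
# data from every installation.
VENDOR_BAKED_IN_KEY = Fernet.generate_key()   # imagine this literal sitting in the binary
shared_cipher = Fernet(VENDOR_BAKED_IN_KEY)
token = shared_cipher.encrypt(b"patient record goes here")
# Every other installation (and an attacker with a demo copy) holds the same
# key, so any of them can run Fernet(VENDOR_BAKED_IN_KEY).decrypt(token).

# BETTER: generate a unique key per installation at deploy time and keep it in
# a protected store (OS key store, vault, HSM), not inside the executable.
per_install_key = Fernet.generate_key()
site_cipher = Fernet(per_install_key)
print(site_cipher.decrypt(site_cipher.encrypt(b"patient record goes here")))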

World readable keys (or keys hardcoded into world readable executables) are other common implementation mistakes we see at Rendition.

As a software vendor, the FTC settlement should send shivers down your spine.  Honestly, how often does the marketing team stretch the truth a little when describing a product's capabilities?  Probably more often than we’d care to admit.  This FTC ruling shows that flippantly saying "yes, of course we adhere to industry standards" can cost you big if you actually don’t.

If you doubt that marketing staff might stretch the truth a little too far, look no further than the idea of "negative day threat protection."  A few years ago at BlackHat, when everyone else was focusing on zero day exploit mitigation, $vendor was advertising "negative day" threat protection.  Despite talking with them, we never did figure out how they were pulling this off...  Later, they defined what they meant by "negative day," but that's a thought for another time.

Monday, January 11, 2016

Former Yandex employee tries to sell source code

There's news that a former Yandex (the leading Russian search engine) employee tried to sell source code for the search engine.  This is a really interesting case because there are presumably not many organizations that would be interested in the Yandex algorithms.  Other search engines already have their own algorithms and probably wouldn't learn much from analyzing those used by Yandex.  Some of the most valuable information a search engine has is its data.  The search algorithm itself would likely be secondary in value to a rival.
But those trying to manipulate search rankings would be very interested in the Yandex source code.  Armed with the Yandex source code, attackers could artificially increase their position in search rankings to attract more traffic.  This would be a gold mine for exploit kit operators, who can infect more victims with higher search rankings.

Know your adversary - or at least have an accurate threat model
Threat Intelligence has become a big focus for us at Rendition Infosec, and one of the first questions you need to ask when assessing risk is "who would benefit from compromising our networks?"  At first glance, it would appear that a rival would benefit most from the search engine code, but careful analysis suggests that someone who wants to manipulate search rankings is at least as likely to benefit as a competitor.  Understanding who your potential adversaries are goes a long way towards a good risk assessment methodology.

Infosec Hooks
As for infosec hooks, there are at least three beyond just Threat Intelligence.  The insider clearly was able to exfiltrate source code from the corporate environment.  While a lot of infosec pros today really hate DLP software, I tend to like it.  I like the fact that it can easily* detect certain patterns of information when they are transiting the network.  Most hatred for DLP systems comes from business lost due to false positive detections (which often manifest as blocks).  But these normally point to poorly tuned rule sets on the DLP systems themselves.  Even DLP implemented in logging only mode can be useful in detecting insider data exfiltration.
* Where "easily" may be defined differently depending on whether you have implemented an SSL decryption solution.

The second infosec hook has to do with outsourcing.  The suspect in the case tried to sell the prized source code for the equivalent of $25k.  Most of our US-based threat models don't consider insiders selling out for so little; we tend to think in terms of six figures and up (which tends to limit the pool of potential buyers).  But we must adapt our threat models to take into consideration the economies of the locations where we outsource.

Finally, even though the perpetrator was found guilty of trying to sell corporate secrets for profit, he was only sentenced to two years probation.  Again, in the US we tend to think of criminal prosecution as an effective deterrent to corporate espionage.  But this is not the case in every country, some of which do not even have hacking laws.  When outsourcing any portion of our operations, we must ensure that we understand the effectiveness of any deterrents against selling our corporate secrets and adjust threat models appropriately.  Two years probation would hardly be an effective deterrent in the US for the attempted sale of corporate secrets for $25,000.  Again - different economies yield different threat models.

Friday, January 8, 2016

There's no hacking in baseball - or is there?

The former scouting director of the Cardinals Major League Baseball (MLB) team, Chris Correa, pleaded guilty to hacking charges.  I'm reminded of the famous line in "A League of Their Own" where Tom Hanks says "There's no crying in baseball."  I can just about hear Tom Hanks saying the same for hacking - "There's no hacking in baseball."  Alas, this plea deal clearly proves that inaccurate.

The charges for the guilty plea
There's so much fun here, I really don't know where to start.

When one of the Cardinals employees (Jeff Luhnow) decided to leave for employment with the Astros, he was told to turn his Cardinals owned laptop over to management.  He was also told to provide the login password for his laptop.  So far, this is pretty normal (with the possible exception of having to provide his password for access to files).

Come on man... Password reuse again?!
Things get interesting when Luhnow commits the mortal sin of password reuse as he moves to the Astros, a competing team.  The Astros maintained an online system for storing proprietary (and valuable) scouting information that they called Ground Control.  At some point the URL for Ground Control became publicly known.  Correa used this URL along with a derivative of Luhnow's Cardinals laptop password to access the Ground Control system under Luhnow's identity.

After Correa accessed the Ground Control system he may have leaked this data to the press.  In any case, Astros' proprietary data was leaked to the media and this spawned an investigation by the FBI.  According to the plea deal, Correa also illegally accessed another Astros employee's email.

Sentencing
Correa faces up to 5 years of jail time and a $250,000 fine for each of the five counts he pleaded guilty to.  Since he was charged with 12 counts, pleading guilty to only 5 may seem like a walk in the park if the other 7 are dropped.  It is unclear whether there were special sentencing recommendations negotiated as part of the plea deal, but part of the plea agreement stipulates that Correa cannot appeal the sentencing decision.  Sentencing is scheduled for April 11, 2016.

We have nothing anyone would hack us for.
How often do we hear this at Rendition Infosec?  Unfortunately, far too often.  And I'm sure we're not alone. The Astros are not an IT company.  They are a major league sports team.  You can imagine that their proprietary systems weren't getting much love from IT and infosec.  The charges indicate that a derivative of Luhnow's Cardinals password (known to Correa) was used for the Astros' Ground Control system.  It seems unlikely that auditing was in use on the system; otherwise, the failed logins generated while Correa tried different derivatives of Luhnow's password would have been discovered.
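Detecting this kind of guessing isn't exotic.  Here's a toy sketch of the sort of failed-logon auditing that would have flagged someone cycling through password derivatives; the log entries are made up for illustration:

from collections import Counter

# Made-up audit log entries of (username, outcome).  In a real environment
# these would come from the application's authentication logs.
auth_log = [
    ("jluhnow", "FAIL"), ("jluhnow", "FAIL"), ("jluhnow", "FAIL"),
    ("jluhnow", "FAIL"), ("jluhnow", "SUCCESS"),
    ("scout42", "SUCCESS"),
]

THRESHOLD = 3   # alert when an account racks up this many failures
failures = Counter(user for user, outcome in auth_log if outcome == "FAIL")
for user, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logons for {user} - possible password guessing")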

We also need to discuss with employees that they absolutely cannot use passwords (or derivatives) that they used at a previous employer.  This is especially important if the former employer is a competitor.  This is particularly difficult to audit for since we can't expect employees to turn over their passwords used at former employers.  But it needs to be part of our user awareness education nonetheless.

Finally, the sentencing in this case is something we'll definitely be keeping a close eye on.  This is a cut-and-dried hacking case where the guilty party sought and obtained financial gain by hacking a competitor.  While the Astros' losses are hard to quantify, the loss of their proprietary information was definitely costly.  It will be interesting to see how this case is sentenced and what sort of precedent that sets.

Time Warner Cable (TWC) customers warned of possible breach

If you are a Time Warner Cable customer, your data may have been leaked.  This is according to news reports that up to 320,000 customers' data may be at risk.  But details seem to be lacking.  Let's dig into this a little.


The Notification
Time Warner Cable claims they were notified by the FBI that some emails and passwords "may have been compromised."  It isn't clear at this point how the FBI was alerted to this or how many customers the FBI is able to confirm were compromised.

Who is to blame?
TWC claimed that it wasn't them, stating
Our understanding is that the compromise had nothing to do with TWC's systems or processes
Well bravo there TWC.  But how could TWC possibly know this so quickly?  Probably due to their impeccable incident response skills.
TWC has found no evidence of a breach in its systems that operate and secure email accounts for our customers.
Any time such claims are made so hastily, consumers have the right to be skeptical.  While I have no doubt that the statement is honest, it is like saying that by examining the skin on my arm, I have found no evidence of liver cancer.  It's an honest statement, but suffers from a criminally incomplete analysis.  But PR firms know this and will decline to offer additional information that might spook customers until there is ironclad proof that they were at fault.

Should I reset my password?
If you have to ask, you probably haven't been reading this blog long.  Of course you should.

What lessons should we be taking away?
First, TWC needs a better PR team.  While this hasn't been entirely FUBAR'd, it definitely could have been handled better.  Customers are being informed of the breach via email and snail mail.  But the story has already hit the headlines, and TWC has lost the ability to control the narrative.

Second, TWC can do a better job disclosing specifics around why they believe the breach had nothing to do with their internal processes.  There has been some spin about the possibility of phishing.  While that is certainly possible, it doesn't seem likely with 320,000 accounts impacted.  If TWC failed to notice 320,000+ phishing messages delivered to their customers then wow did they ever fail.

Finally, examine the processes by which you entrust customer data (and the types of data you entrust) to your third party partners.  When conducting security reviews with Rendition Infosec, this is an area where we find policies are often insufficient.  If you have third party partners who have access to (or create copies of) customer data, this incident is a great excuse to review those policies and procedures to ensure that they will insulate you in the event a third party is breached.  Remember, they are your customers and are unlikely to care whether you lost the data or one of your partners did.  If indeed this is a third party breach, why did the third party have access to passwords in the first place?  Why did anybody?

Update: Twitter user Marc Pretico informed me of something I should have already known. "Time Warner" (not breached) and "Time Warner Cable" (disclosed breach) are not the same company, yet I was using the two interchangeably.  Post was updated to address this discrepancy. Thanks for the correction.

Thursday, January 7, 2016

Did the FBI really "crack TOR?"

Reports have surfaced that the FBI cracked TOR to bring down more than 1,000 people involved in child pornography.  The FBI was able to seize the illicit website "Playpen" and then chose to host it for approximately two weeks on its own servers.  Details as to how the FBI was able to do this are a little sketchy, but techniques previously developed by CMU are thought to have been critical in unmasking the TOR hidden service.


The FBI then used a Network Investigative Technique (NIT) to reveal data about site visitors and "crack TOR."  While some sites are reporting this as revolutionary, it probably isn't.  Realistically, this is something that could have been done with the open source code that Tim Tomes published with HoneyBadger.
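Conceptually, a NIT of this sort just gets the visitor's browser to run active content that phones home over a channel outside Tor, exposing the real client address.  Here's a hedged, defender-oriented sketch of the core distinction - the exit-node list is a tiny made-up sample, and this is not how HoneyBadger or the FBI's NIT is actually implemented:

```python
#!/usr/bin/env python3
"""Illustrative sketch: did a callback arrive via Tor or from a real address?

The exit-node set below is an invented sample; in practice you would load
the Tor Project's published exit list.
"""

TOR_EXIT_NODES = {"198.51.100.7", "198.51.100.9"}  # made-up sample values

def classify_callback(client_ip):
    """Traffic that stays inside Tor arrives from a known exit node;
    active content that phones home outside Tor arrives from the
    visitor's real address instead."""
    if client_ip in TOR_EXIT_NODES:
        return "arrived via a Tor exit node"
    return "likely the client's real address"

if __name__ == "__main__":
    for ip in ("198.51.100.7", "203.0.113.42"):  # example values only
        print(ip, "->", classify_callback(ip))
```

The hard part isn't this comparison - it's getting the target's machine to make that out-of-band request in the first place, which is exactly what active content in the browser can be abused to do.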

At Rendition Infosec, we like to remind clients that every time you visit a website, you have to trust that it will not attempt to compromise you.  This is especially true if the site requires the browser to run active content.  This is why, as infosec professionals, we consistently tell users not to click on untrusted links.  But what then constitutes a "trusted" link?  After the FBI seized the Playpen server, it was no longer trustworthy, even though users had no way of knowing this.

The FBI essentially pulled off a watering hole attack.  A watering hole attack is one where the adversary compromises a website used by a specific target population and then uses it to target that population by delivering malware.  Watering hole attacks are nothing new and have been reported on since at least 2012.  In one recent case, the Forbes site was reportedly compromised in an attempt to exploit users in the financial services industry.  There's no good defense against a watering hole attack: once you trust a site, you generally continue to trust the site.
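There's no perfect defense, but one modest detective control is to baseline the external resources a trusted site normally loads and alert when scripts start arriving from new domains.  A rough sketch of that idea - the URL and baseline below are placeholders, and real watering holes won't always be this obvious:

```python
#!/usr/bin/env python3
"""Rough sketch: alert when a trusted page loads scripts from new external
domains.  The URL and baseline set are placeholders for illustration."""
import re
import urllib.request
from urllib.parse import urlparse

def external_script_domains(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    domains = set()
    # Crude extraction of <script src="..."> hosts - fine for a sketch
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE):
        host = urlparse(src).netloc
        if host:
            domains.add(host.lower())
    return domains

if __name__ == "__main__":
    url = "https://www.example.com/"       # a site you trust (placeholder)
    baseline = {"www.example.com"}         # domains you have previously vetted
    new_domains = external_script_domains(url) - baseline
    if new_domains:
        print("New external script domains:", ", ".join(sorted(new_domains)))
```

It won't catch an attacker who injects malicious code directly into an existing, already-trusted resource, but it costs almost nothing to run.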

Wednesday, January 6, 2016

Submarine plans and communications hand delivered due to hacking fears

I saw this a few months ago while teaching in Australia, then forgot about it until I was chatting with a friend and it came to mind as an example.  In Australia, the $20 billion contract for the next generation submarine fleet is big news.  It's big news in Germany, France, and Japan too - countries who are in the final round of bidding for the job to build the subs.

But reportedly (and not surprisingly), China and Russia are also interested in the submarine plans.  Germany's contractor said they are receiving 40 hacking attempts per night.  It is of course unclear what they consider to be a "hacking attempt," but it is clear that the attempts are on the radar of the executives at the shipbuilders.

Perhaps the most interesting development in the story is the report that due to increasing hacking attempts, the organizations involved are resorting to hand delivery of sensitive data.  At Rendition Infosec, we always recommend that organizations have out of band communications for use in incident response.  The rationale is that if the attackers have compromised your mail server (or other in-band communications) you don't want them listening in on your conversations during the incident.  It appears that the Australian government has taken this to a whole new level with hand carried documents and communications.

What can we learn from this as infosec professionals?  First, Australia has publicly set a precedent for extreme caution which we can cite if needed.  While I'm sure this has been done before, good public examples never hurt.  Second, we can use this as an example of possible overreaction to hacking fears.  If the reports are true, there were probably other measures that could have been used to secure communications that didn't rely on hand carrying.  If conducting a sand table exercise, I'd ask how much inefficiency this would introduce into the process and ask business stakeholders to assign a dollar value to that.  Security is always a cost center, and we need to enable the business to operate safely while still operating.
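As a back-of-the-envelope example of what that dollar-value exercise might look like - every number here is invented purely to illustrate the conversation, not drawn from the actual submarine program:

```python
# Hypothetical cost comparison: hand-carrying documents vs. an encrypted
# out-of-band channel.  All figures are invented for illustration.

deliveries_per_week = 10
hours_per_delivery = 6        # courier travel + handoff time
loaded_hourly_rate = 120      # fully loaded hourly cost of the courier
weeks_per_year = 50

annual_courier_cost = (deliveries_per_week * hours_per_delivery *
                       loaded_hourly_rate * weeks_per_year)

encrypted_channel_setup = 50_000    # hypothetical one-time setup cost
encrypted_channel_annual = 20_000   # hypothetical annual operating cost

print(f"Hand carry, annual:        ${annual_courier_cost:,}")
print(f"Encrypted channel, year 1: ${encrypted_channel_setup + encrypted_channel_annual:,}")
```

Numbers like these don't settle the question by themselves, but they force business stakeholders to say out loud how much the extra caution is actually worth to them.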

Tuesday, January 5, 2016

Comcast security systems vulnerable by design - but what should they change?

If you follow me on Twitter, you know I have a love/hate (mostly hate) relationship with my ISP Comcast.  The problem is that in my rural area, I simply lack another choice.  Last November, Comcast was in the news for leaking private details of home wifi access points.  But this morning I saw that Comcast is in the news this time for their faulty security systems. 

Possible replacement sign?
Researchers from Rapid7 discovered that the wireless alarm system fails open by design.  One of the selling points for the Comcast home security system is that it is wireless and hence easier to install.  But ease of installation comes with a price.  The Comcast home security system, like many other wireless security systems, uses the ZigBee protocol for communications between the remote sensors and the control panel.  Researchers found that if they could jam the signal between the sensor and the control panel, the alarm wouldn't activate.  They also discovered that in some cases it took up to three hours for the remote sensor to re-synchronize with the control panel after the jamming stopped.  During that window, the sensor was no longer being actively jammed, but it was still ineffective at sounding the alarm.

When we work with customers at Rendition Infosec, one of the design decisions we always tell them to consider is whether to have their security solutions fail open or fail closed.  There's no consistently correct answer as to which method is best.  If you are protecting classified information, failing closed is clearly the correct answer.  If you are providing lifesaving information to a doctor to treat a patient, failing open is probably the correct answer - the loss of information can always be mitigated, the loss of life less so.
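To make the fail-open/fail-closed trade-off concrete, here's a minimal sketch of sensor supervision with a configurable failure policy.  The heartbeat interval, timeout, and structure are all invented for illustration - this is not how the Comcast/ZigBee system actually works:

```python
import time

# Minimal sketch: supervision of wireless sensors with a configurable
# failure policy.  Fail closed = silence trips the alarm; fail open = it doesn't.
HEARTBEAT_TIMEOUT = 60   # seconds of silence before a sensor is "missing"
FAIL_CLOSED = True       # set False for fail-open behavior

last_heartbeat = {}      # sensor_id -> timestamp of last supervision message

def record_heartbeat(sensor_id):
    last_heartbeat[sensor_id] = time.time()

def sensors_to_alarm():
    """Return the sensors that should trigger an alarm under the current policy."""
    now = time.time()
    silent = [sid for sid, ts in last_heartbeat.items()
              if now - ts > HEARTBEAT_TIMEOUT]
    # Fail closed: jamming or signal loss trips the alarm.
    # Fail open: silence is tolerated and nothing fires.
    return silent if FAIL_CLOSED else []

# Example: simulate a sensor that has been silent for two minutes
record_heartbeat("front_door")
last_heartbeat["front_door"] -= 120
print(sensors_to_alarm())
```

Either policy can be defensible depending on the application; what's harder to defend is a multi-hour window in which the panel neither hears from the sensor nor treats its silence as noteworthy.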

In Comcast's specific case, it's hard to say what the correct answer is.  Should the alarm activate if the remote sensor loses communication with the control panel?  Perhaps it should in some high security applications.  But let's be fair - if your application is high security enough to warrant that, you should probably hard-wire the security system anyway.  In a wireless environment, imagine the number of potential false positives you could have.  The number of those false positive events is likely to increase in densely populated areas (apartments, town homes, etc.), which is precisely the target market for the "no wiring" security solution Comcast is peddling.

All in all, while I do find the research disturbing from a security standpoint, I wouldn't recommend that the alarm systems fail closed by default.  The high number of false alarms would likely render the systems useless (or unused) anyway.  What Comcast should, however, seek to correct immediately is the amount of time it takes for a sensor to re-establish communications with the control panel/base station.  I think anyone would agree that three hours is simply too long for this process to take.

Finally, this is another great case of asking "what's the worst that can happen?" when adopting a product.  While the products probably tested fine in a lab under normal use, they are clearly vulnerable to trivial tampering in the real world.  Comcast is likely opening itself to legal action by providing these vulnerable solutions if it does not openly disclose the vulnerabilities to current and future customers.