24 Sep

Volkswagen and their emission cheating software

Everyone these days is talking about Volkswagen and how they made software that cheated in vehicle emission tests. Volkswagen’s stock price is tanking, the CEO has been asked to resign, EU bureaucrats are looking into it, and other major engine manufacturers are being investigated as well.

Let me give my opinion on this whole affair.

How did they do it?

A quote from the EPA violation notice sums it up well enough:

The ‘switch’ senses whether the vehicle is being tested or not based on various inputs including the position of the steering wheel, vehicle speed, the duration of the engine’s operation, and barometric pressure. These inputs precisely track the parameters of the federal test procedure used for emission testing for EPA certification purposes. During EPA emission testing, the vehicles’ ECM ran software which produced compliant emission results.

So, they added a piece of code to the vehicles’ ECU that was able to detect testing mode and then adjust the engine’s operating parameters. It’s very similar to what ECU tuning shops do, except Volkswagen did it to reduce emissions in certain cases, while petrolheads do it to achieve the best possible performance from their cars.
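Conceptually, such a defeat device doesn’t need to be complicated. Here is a minimal sketch of what a test-cycle detector could look like – purely illustrative, with invented class names, inputs and thresholds, not Volkswagen’s actual code:

    using System;

    // Purely hypothetical illustration - names and thresholds are invented.
    class TestCycleDetector
    {
        public bool LooksLikeEmissionTest(double steeringAngleDegrees, double speedKmh,
                                          TimeSpan engineRunTime, double barometricPressureHpa)
        {
            // On a dyno the wheels spin but the steering wheel never moves,
            // and speed and duration closely follow the scripted test cycle.
            bool steeringIdle      = Math.Abs(steeringAngleDegrees) < 1.0;
            bool plausibleSpeed    = speedKmh < 130.0;
            bool withinTestLength  = engineRunTime < TimeSpan.FromMinutes(31);
            bool plausiblePressure = barometricPressureHpa > 900.0;

            return steeringIdle && plausibleSpeed && withinTestLength && plausiblePressure;
        }
    }

If a detector like this says “test”, the ECU runs the clean calibration; otherwise it runs the dirty one.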

Whose decision was it?

Some dude on hackaday sees a big ethical issue here:

An engineer, either in Volkswagen or less likely at a subcontractor, signed off on code that would defeat the entire purpose of EPA and Clean Air Act regulations. Someone with the authority to say ‘no’ didn’t, and this code was installed in the electronic control unit of millions of cars.

Say what?

This dude apparently knows nothing about how corporations work. There is no way in hell that some engineer came to his boss and said: “Hey, I just figured out a way to cheat in USA emission tests, do you think it will be useful for our company?”.

No. Fucking. Way.

I’m convinced that this decision came from middle management and was passed down to the engineers. Something like: “We don’t care how you do it, just make sure our diesel engine passes those tests. Just don’t tell us how you managed that.” Plausible deniability, you know.

However, the dude from hackaday is absolutely right about another aspect – some engineer will likely lose his job over this. It’s not because he did something wrong, it’s because the company needs a scapegoat. Just like they sacrificed Chief Executive Martin Winterkorn – the CEO had nothing to do with the scandal, it’s just one of those steps a company takes for good PR.

How did they get caught?

As strange as it sounds, they got caught by accident. The International Council on Clean Transportation (ICCT) wanted to convince European bureaucrats to implement the strict US standards for diesel emissions in the EU. So, they hired West Virginia University’s Center for Alternative Fuels, Engines and Emissions (CAFEE) to run tests in the field. And as the interim director of CAFEE explains:

They rented VW diesels, measured their tailpipe emissions on the road and compared them to measurements on the same cars made in the lab. The discrepancies were huge.

So, the scientists made some presentations in 2014, published their research online, and nobody except US bureaucrats cared about it. Until last week, that is.

Now suddenly everybody is acting as if the world is going to be destroyed by this.

So, how bad is it really?

Let me answer this question with a quote from the original EPA news release:

These violations do not present a safety hazard and the cars remain legal to drive and resell. Owners of cars of these models and years do not need to take any action at this time.

I’ll give you a moment to think about that.

482’000 cars in the USA alone. 11’000’000 cars in the whole world. 5 years. NOx limits exceeded by 20 times. Affected cars are not a safety hazard. US cities are not covered in black smog. In fact, nobody noticed anything for 5 years. What does that tell you?

To me, the answer is simple – those NOx limits are fucking bullshit. They make your car more expensive and reduce the horsepower of your engine. They don’t save the planet. They are there because some bureaucrat needs to justify his puny existence in some environmental agency.

Don’t get me wrong – I do care about the environment. But you are not helping the environment much by limiting the already small emissions of NOx. Instead, you should rather look at Asia and its industrial practices. For example, the burning of forests in Sumatra produces so much smoke that the entire city of Singapore (80 kilometers away from Sumatra!) sees its air quality deteriorate into the “very unhealthy” range. Or look at the half of China’s rivers that are polluted with industrial waste and fertilizers. Now, that is something that actually needs fixing!

To sum it all up

Volkswagen knew these regulations were bullshit and wouldn’t save the Earth. They knew their engines couldn’t pass them. So, they had balls big enough to give all the bureaucrats the finger and cheat their way through.

I say – good for them! In my scorebook it’s “Volkswagen 1, Bureaucrats 0”.

23 Sep

Why do most antiviruses suck?

Mandatory disclaimer – all views in this article are my own and in no way represent the views of my employer or my coworkers.

Over the last few weeks I noticed several posts about antiviruses, False Positives and how bad the situation is. For example, this essay from atom0s and this complaint (reg required) by mudlord. And then there is this epic rage by evlncrn8. :)

To understand why antiviruses work this way, you need to consider plenty of factors. So, let’s take a quick look.

Why make antiviruses?

It usually starts with a group of skilled guys wanting to save the world. They make a great product, people like it, the company makes some money, more people like the product, the company grows even more, and so on..

But as the company grows, priorities change. The bigger and more popular the company gets, the more managers and investors it attracts. Those guys usually have no clue about the technology behind an antivirus. And they don’t care about the technology, they only see numbers and dollar signs everywhere.

And then the primary goal of the company changes to making profit for shareholders.

What’s with the UI?

Let’s face it – readers of my blog are not the usual antivirus users. Antiviruses are used by everyone – from extremely skilled IT geeks to Granma Millie living in a retirement home. And this causes the second biggest problem – big companies cannot make a product just for skilled IT geeks, as nobody else will be able to use it. You can’t make a product for the average user either. You need to make something that even Granma Millie can use.

And that’s why most software products in recent years get dumbed-down – managers think that they need to do “inclusive designs” – so even the most retarded of users can use the product.

New shiny features.

One of the most common complaints I hear is that all antivirus products are becoming huge bloatware. There are several reasons for that. First, product managers just don’t know any better. They look at all the competitors – if Company A has feature X, you need to have feature X, no matter whether it actually makes sense or not. The second reason is that the company somehow needs to sell a new version of the product. You can’t say – this version is the same as the old one, we just changed colours and moved buttons around. No, you need to have something like “New version, now with features Z and Q!”

It’s not the best way but it’s certainly the easiest!

AV reviews and tests.

When you are purchasing a new car, you probably search for reviews online. You probably do the same when you decide to move to a new city, plan your vacation or make any other big decision. That’s just normal.

And it’s the same with antiviruses – most people will either get a recommendation from someone they trust, or they’ll search for reviews online. So, the companies need to invest a lot in PR and make sure their product looks good in tests and reviews.

Testing methodologies are, most of the time, not representative of the real-life experience of ordinary users. Testers take whatever pieces of malware they can find and test AV products against them. They don’t distinguish between different types of malware, sample prevalence or geographical distribution.

I’m sure you feel much safer knowing that your antivirus protects you against a worm that is distributed only through the Chinese QQ messenger, or against that very nasty banker attacking only Brazilian banks. Don’t you?

To test the False Positive rate, testers check a number of files from popular download sites like CNET, Softpedia or PCWorld, or files collected from European SMB companies. Of course, AV companies do the same thing and try to make sure they have no false positives on those sites. But if you’re a small software dev and distribute your software by other means, or don’t target SMB companies – well, bad luck. A False Positive on your file doesn’t influence the test results. :)

It’s a load of crap – but every company is still doing it, because lots of potential users rely on such “tests” before buying an antivirus. Some companies even cheat in tests.

Automation and big data.

The number of new malware samples and other crap is increasing exponentially these days. According to McAfee’s Quarterly Threat Reports, ~4 million new malware samples appeared in Q1 2009, ~7 million in Q1 2012, ~32 million in Q1 2014 and ~48 million in Q1 2015.

Think about it. How can you process 48’000’000 samples?

The answer is simple – automation, automation and more automation. Malware classification is a hugely automated process. Does the file look weird? Does it do weird things? Was it sent out in a spammy email? Is it encrypted to prevent automated analysis? Was it protected using a stolen copy of Themida? Do other antiviruses think it’s bad? Game over, classified as bad!
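As a rough illustration – and nothing more than that, since no AV vendor publishes its real rules – you can think of such an automated verdict as a simple weighted score:

    // Toy scoring model, invented for illustration; real engines are far more complex.
    static class AutoClassifier
    {
        public static bool LooksMalicious(FileFacts f)
        {
            int score = 0;
            if (f.HasWeirdStructure)   score += 2;  // malformed headers, strange sections
            if (f.DoesWeirdThings)     score += 3;  // e.g. drops files, touches autorun keys
            if (f.ArrivedViaSpam)      score += 2;
            if (f.UsesStolenProtector) score += 3;  // e.g. packed with a stolen Themida licence
            if (f.OtherEnginesFlagIt)  score += 4;
            return score >= 5;         // game over, classified as bad
        }
    }

    class FileFacts
    {
        public bool HasWeirdStructure, DoesWeirdThings, ArrivedViaSpam,
                    UsesStolenProtector, OtherEnginesFlagIt;
    }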

Sure, sometimes some legitimate software gets classified as bad. At this scale, it’s bound to happen.

If automation is not able to classify a file, malware researchers will need to analyze it manually. This is where big data software, statistical models and cluster analysis come in. They alert researchers to traffic anomalies, thousands of suspiciously similar files and other “interesting” stuff. Files get prioritized based on prevalence, the number of users affected and other factors. And, of course, the bigger the issue, the faster it gets attention from a real human being.

So, if your legitimate software is classified as bad and it affects all of your 50 users – it’s not because the AV company hates you or your product. Really, they don’t hate you. They just don’t know you even exist. So, the sooner you let the AV company know about the problem, the sooner they will fix the issue.

But hiding your head in the sand and saying “I don’t have time to play a cat-and-mouse game with anti-virus companies” will get you nowhere.

Are we all doomed?

Think about the points I just made. Your product needs to bring the company money. You need to make a product Granma Millie can use. Your product needs to perform well in tests. Given the requirements, no matter how skilled the developers and researchers are, the end product will be…

Well, it will be just like the product you’re getting now – a dumbed-down, feature-bloated, money-making piece of software that fares reasonably well in artificial tests.

You’re living in the era of globalization and money-making corporations. Deal with it.

31 Aug

Let’s say something good about Google Chrome

In my previous post I criticized Google’s decision to disable NPAPI plugin technology. I still think it was a bad decision. But today let’s talk about a change that should be an improvement for virtually all users.

Chrome will begin pausing many Flash ads by default to improve performance for users. This change is scheduled to start rolling out on September 1, 2015.

Source: https://plus.google.com/+GoogleAds/posts/2PmwKinJ7nj

Say what? Is Google going against ads? 8-) Well, not really. HTML5 ads are apparently OK. But those obnoxious Flash-based ads will become click-to-play.

The setting in question is located in Settings->Advanced->Content Settings->Plugins:
It has been present in Chrome for several months already. So, I’m guessing that Google will only be pushing out some configuration change, or changing the default value for new installations. Who knows, as Google is not giving us any details at this point..

Google’s ad detection algorithm might need some improvements and there might be some other side-effects but overall I think it’s a great change! Good job Google, you made my day better! :)

21 Aug

Dancing pigs – or how I won my fight with Google Chrome updates

I think removing NPAPI support from Google Chrome was a really stupid decision on Google’s part. Sure, Java and some other plugins were buggy and vulnerable. But there is a huge group of users that need NPAPI for perfectly legit reasons. Certain banks use NPAPI plugins for 2-factor authentication. Certain countries have built their digital government services and digital signatures on NPAPI plugins. And the list goes on.

I have my reasons too. If I have to run an older version of Chrome for that, I will do so – and no amount of nagging will change my mind.

That’s a well-known fact in security circles, known as “dancing pigs”:

If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead gets a hortatory message describing the potential dangers of the applet — he’s going to choose dancing pigs over computer security any day

Unfortunately pointy-haired managers at Google fail to understand this simple truth. Or they just don’t give a crap.

Hello, I am AutoUpdate, I just broke your computer

Imagine my reaction one day when my NPAPI plugin suddenly stopped working. It just wouldn’t load. It turned out that Google Chrome had been silently updated by Google Update. It broke my plugin in the process and – officially – there is no way of going back.

What do you think I did next?

That’s right – I disabled the Google Update services, patched GoogleUpdate.exe to terminate immediately and restored the previous version of Google Chrome from a backup. Dancing pigs, remember?
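Roughly, the first step looks like this from an elevated command prompt – the Google Update services are usually named gupdate and gupdatem, adjust if yours are named differently:

    sc stop gupdate
    sc config gupdate start= disabled
    sc stop gupdatem
    sc config gupdatem start= disabled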

Your Google Chrome is out-of-date

It worked well for a few months. But this week, Chrome started nagging me again.
A quick Google search led me to this answer: you need to disable Chrome updates using Google’s administrative templates.

Let’s ignore the fact that the described approach works only for XP (for Windows 7 you need to use ADMX templates, which you have to copy manually to %systemroot%\PolicyDefinitions) and that there are now something like 4 places related to Google Chrome updates in the policies.
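For reference, the policies from those templates end up as plain registry values, so you can also set them directly – something along these lines (value names as I understand them from Google’s Update policy template; double-check against the template version you actually have):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Update]
    ; 0 = updates disabled
    "UpdateDefault"=dword:00000000
    "AutoUpdateCheckPeriodMinutes"=dword:00000000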

So, I set the policies and it seemed to work. For a day.

Your Google Chrome is still out-of-date

Imagine my joy the next day when I saw yet-another-nagscreen. Like this:

No, I don’t need that update. Really!

I can close the nag, but 10 minutes later it will pop up again. And it looks like the only way to get rid of the nag is to patch chrome.dll. I really didn’t want to do that but dumb decisions by Google managers are forcing my hand here.

Reversing Google Chrome

Since Chrome is more or less open-source, you can easily find the nagware message:

From here, we can find which dialog is responsible for the nag:

From there we can find NOTIFICATION_OUTDATED_INSTALL, which comes from UpgradeDetector. And finally we arrive at the CheckForUpgrade() procedure:

This is what I want to patch! But how?

You could load the Chrome DLL in IDA and try to find the offending call on your own. But I’m willing to bet that it would take you hours, if not days. Well, PDB symbols to the rescue!

Symbols for Chrome are stored at https://chromium-browser-symsrv.commondatastorage.googleapis.com and you will need to add that path to your _NT_SYMBOL_PATH. Something like this:
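(C:\symbols below is just an arbitrary local symbol cache directory – use whatever you like.)

    set _NT_SYMBOL_PATH=srv*C:\symbols*https://chromium-browser-symsrv.commondatastorage.googleapis.com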

_NT_SYMBOL_PATH is a very complex beast, you can do all sorts of things with it. If you want a more detailed explanation of how it works, I suggest that you read Symbols the Microsoft Way.

After that, you can load chrome.dll in IDA, wait until IDA downloads 850MB of symbols, and drink a coffee or two while IDA analyzes the file. Then it’s all a walk in the park. This is the place:

And one retn instruction makes my day so much better..

Final words

Unfortunately for me, this world is changing. You are no longer the sole owner of your devices; all the big corporations want to make all the decisions for you.

Luckily for me, it is still possible to achieve a lot using a disassembler and a debugger. And reverse engineering for interoperability purposes is completely legal in the EU. :)

Have fun!

08 Jul

Fun with encrypted VBScript

In the post on malicious LNK file analysis, I mentioned that the malicious LNK file contained an encrypted VBScript. Of course I wanted to check the script, so I started Googling for tools to decrypt VBScript files.

Existing tools

There is some research about the encryption algorithm used, and a few tools as well. The most useful to me seemed to be the article explaining the algorithm behind the script encoder. Its author has also released several versions of his scrdec tool, which is supposed to decrypt VBE files.

But, as usually happens, publicly available tools tend to blow up in the most unexpected ways on non-standard scripts.

I’m gonna fix that!

Cause of the problems

All publicly available tools assume that the encrypted file will be using ANSI encoding. But it doesn’t have to! :) All the operations in vbscript.dll use wide chars and wide strings.

For example, here’s a simple encrypted VBScript that is saved as a “unicode” file: https://www.mediafire.com/?cd98w73v12fpdq7. This one you can actually open in notepad, save as ANSI and then decode using scrdec.exe.

But how about this one? https://www.mediafire.com/?xdj6dfxsrcgbdr1. You can open it in a text editor and save it as ANSI – but after that it will not work anymore, and script decoders will fail as well. How about that? ;)

Writing my own script decoder

Taking these problems into account, I decided to write my own decoder in C#. C# seemed to be a logical choice, as .NET uses wide strings internally and has a StreamReader class that can take care of ANSI/Unicode/UTF8 encoding automatically – so I could focus on the actual decryption algorithm instead.
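The encoding handling really is a one-liner – a sketch of the idea (not necessarily the exact code of my decoder):

    using System.IO;
    using System.Text;

    static class ScriptReader
    {
        // StreamReader sniffs the BOM and falls back to the given encoding (ANSI here),
        // so ANSI, UTF-16 and UTF-8 scripts all come out as normal .NET strings.
        public static string ReadScriptText(string path)
        {
            using (var reader = new StreamReader(path, Encoding.Default, detectEncodingFromByteOrderMarks: true))
                return reader.ReadToEnd();
        }
    }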

Well, almost.

The .NET Framework contains Base64 decoding functions. Unfortunately, VBE files use a nonstandard approach to Base64. So, I had to implement a simple Base64 decoder from scratch, taking into account all the weirdness of the VBE files. But that’s a very simple thing to do.
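Rolling your own Base64 is mostly a lookup-table exercise. A stripped-down sketch of decoding one group of four characters – the alphabet shown is the standard one, a VBE decoder would plug in its own table and ordering quirks:

    // Generic table-driven Base64: four characters become three bytes.
    // No input validation here - this is only a sketch.
    static class MiniBase64
    {
        const string Alphabet =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

        public static byte[] DecodeGroup(string fourChars)
        {
            int bits = 0;
            for (int i = 0; i < 4; i++)
                bits = (bits << 6) | Alphabet.IndexOf(fourChars[i]);

            return new[] { (byte)(bits >> 16), (byte)(bits >> 8), (byte)bits };
        }
    }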

The first version of the decoder took me a few hours to make. And it worked nicely! :) But I still wasn’t satisfied, because the vbscript.dll disassembly showed me quite a few “weird” jumps. They were not taken in my examples, but I was sure I could make up some test files that would force these jumps to be taken.

Oh boy, I rather wish I hadn’t done that..

Bug-features of VBScript.dll

It turns out that not only can you use wide chars, you can also do some other really hairy stuff (see the sketch after this list):

  • you can mix encrypted code blocks with plain VBS in one file;
  • you can put several encrypted code blocks in one file;
  • if an encrypted code block’s length cannot be decoded or is larger than the file size, the block is treated as plain VBS;
  • if an encrypted code block’s checksum cannot be decoded or doesn’t match the calculated one, the block is treated as plain VBS.
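To cope with all that, a decoder basically has to scan for block start markers and fall back to plain VBS whenever a candidate block doesn’t decode cleanly. A rough sketch of that loop – TryDecodeBlock is a hypothetical placeholder for the actual length/checksum handling and decryption:

    using System;
    using System.Text;

    static class VbeDecoder
    {
        // Sketch only: scan for "#@~^" markers, decode valid blocks,
        // keep everything else as plain VBS. TryDecodeBlock is a placeholder.
        public static string DecodeMixedScript(string text)
        {
            var output = new StringBuilder();
            int pos = 0;
            while (pos < text.Length)
            {
                int start = text.IndexOf("#@~^", pos, StringComparison.Ordinal);
                if (start < 0)
                {
                    output.Append(text, pos, text.Length - pos);  // rest of the file is plain VBS
                    break;
                }

                output.Append(text, pos, start - pos);            // plain VBS before the marker

                string decoded;
                int consumed;
                if (TryDecodeBlock(text, start, out decoded, out consumed))
                {
                    output.Append(decoded);                       // a valid encrypted block
                    pos = start + consumed;
                }
                else
                {
                    output.Append(text, start, 4);                // bad length/checksum: treat marker as plain text
                    pos = start + 4;
                }
            }
            return output.ToString();
        }
    }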

Having fun already?

It took me a few more hours to get my script decoder right. Now it should work almost the same way Microsoft’s vbscript.dll works. But I wonder how many AV engines got this implementation right? We shall test and see!

Testing AV engines

OK, OK, I didn’t really test AV engines. I just used VirusTotal to scan the files. This approach is really flawed (for many reasons that deserve a separate blog post), but it gives you some estimate of how good or bad the situation really is.

I googled for some really common and easily detected VBScript malware. In a few minutes I found this great example, named aiasfacoafiasksf.vbs.

Let’s see what VirusTotal says for the original file:

File name: aiasfacoafiasksf.vbs
Detection ratio: 38 / 56
Analysis date: 2015-06-30 12:10:06 UTC

OK, that’s pretty decent. Should I try encrypting it using Microsoft’s screnc.exe?

Hmm, VT results are 24/55. Looks like some antiviruses don’t even know that VBScript can be encrypted..

Let’s see what happens when we start pushing the limits:

  • Converting the VBE to unicode we get: 19/55. Wide strings? Who needs that, the entire world speaks English.
  • A VBE file consisting of multiple blocks: 12/55. I know, a for loop is hard to implement.
  • Mixing plain text and encrypted VBE blocks: 10/55. Nah, that lonely jmp will never be taken.
  • Mixing plain/encrypted blocks and using unicode: 7/55. I feel really protected now.

Please note, I haven’t modified a single byte in the plain-text representation of the malicious script. All I did was change the way the script is encoded.

Let’s see what happens when I add some comments to the malicious script. Again, these are cosmetic changes; I am not changing a single variable name or any functionality of the script. Any AV engine that normalizes input data before scanning should still detect the malware.

  • Plain VBS with extra comments: 14/55. Apparently they don’t normalize input data.
  • VBE with extra comments: 10/55. That’s kinda expected.
  • Extra comment containing a VBE start marker: 10/55. Suddenly I’m getting “BehavesLike.JS.Autorun.mx” from McAfee. Excuse me? Are you using a PRNG to generate those detections?

OK, so adding comments to VBScript is not that bad. But what happens when we put it all together?

Add extra comments + one VBE start marker in comment + one encrypted VBE block + use unicode: 3/53.


What can I say? Apparently some AV companies love to talk about their script engine features – but they don’t implement these features properly. It would be extremely easy to detect most of my test files as malformed VBScript – as there is no way to generate such files using standard tools. But they don’t even do that. Considering all this misery, I will not release any code that could be used to create a super-duper-awesome FUD VBScript encoder – even though it would take less than an hour to make one.

Note – it took me a few weeks to get this post from a working draft to a published version. I’m sure that current VirusTotal results will be different, as most AV companies have had a chance to process the script files and make signatures for them. ;) Also, any decent AV company should be able to get those samples from VirusTotal if they wish to fix their VBScript processing.

Have fun and keep it safe!

Useful links

Windows Script Encoder v1.0
My VBScript decoder: https://www.mediafire.com/?9pqowl05um4jums
I am not posting the source code but this is a non-obfuscated application made in C#. So feel free to use any decompiler you like..
Set of non-malicious VBE files demonstrating some edge cases of what you can do: https://www.mediafire.com/?bu3u62t858dn4id
Article in klaphek.nl, explaining the algorithm behind script encoder
scrdec.exe v1.8
scrdec.c v1.8
vbs_dec.pas by Andreas Marx

29 Jun

Linking OMF files with Delphi

Continuing the discussion about the Delphi compiler and object files.

Here is the OMF file template I made for 010 Editor: https://www.mediafire.com/?bkpbkjvgen7ubz1

Please note, it is not a full-featured implementation of the OMF specification. I only implemented the OMF file records that are processed by the Delphi 2007 compiler. So, next time you get a cryptic compiler error while trying to link an OMF file in Delphi, you can take a look into your OBJ file and make an educated guess about what’s causing the problem.

TL;DR version

In 95+% of cases you will encounter an OBJ file that has an unsupported segment name in a SEGDEF record. And it’s a simple fix, too – you just need to use objconv.exe by Agner Fog with the -nr option to rename the offending segment. Something like this:
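(Illustrative invocation only – here I assume the offending segment is a COFF-style .text that has to become _TEXT; adjust the names to whatever your OBJ file actually contains.)

    objconv -nr:.text:_TEXT mylib.obj mylib_fixed.obj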

The next possible issue is exceeding the allowed number of EXTDEF or LNAMES records – this can happen if you’re trying to convert a really large DLL file into an OBJ file.

Finally, your OBJ file might contain some record type which is not supported by the Delphi compiler at all. I’m not aware of a simple way to fix that; I would try using 010 Editor and the OMF template to remove the entire record.

If your problem is not caused by any of the above issues, please feel free to drop me a note – I’ll be happy to look into it.

Known limitations of Delphi compiler

This is a list of limitations I was able to compile and/or confirm. Some of them come from Embarcadero’s official notes and the rest I obtained by analyzing dcc32.exe.

SEGDEF (98H, 99H)

  • Not more than 10 segments – if the number of segments exceeds 10, a buffer overrun will probably happen.
  • Segments must be 32-bit – a 16-bit segment will cause “E2215 16-Bit segment encountered in object file '%s'”.
  • Segment name must be one of (case insensitive):
    • Code segments: “CODE”, “CSEG”, “_TEXT”
    • Constant data segments: “CONST”, “_DATA”
    • Read-write data segments: “DATA”, “DSEG”, “_BSS”

    A segment with any other name will be ignored.


LNAMES (96H)

Not more than 50 local names in LNAMES records – will cause the “E2045 Bad object file format: '%s'” error.


EXTDEF (8CH)

Not more than 255 external symbols – will cause “E2045 Bad object file format: '%s'”.
Certain EXTDEF records can also cause “E2068 Illegal reference to symbol '%s' in object file '%s'” and “E2045 Bad object file format: '%s'”.

PUBDEF (90H, 91H)

Can cause “E2045 Bad object file format: '%s'” and “F2084 Internal Error: %s%d”.


LEDATA (A0H, A1H) and LIDATA (A2H, A3H)

Embarcadero says that “LEDATA and LIDATA records must be in offset order” – I am not really sure what that means. Can cause “E2045 Bad object file format: '%s'”.

FIXUPP (9CH)

This type of record is unsupported and will cause the immediate error “E2103 16-Bit fixup encountered in object file '%s'”.


FIXUPP (9DH)

Embarcadero documentation says:

  • No THREAD subrecords are supported in FIXU32 records
  • Only segment and self relative fixups
  • Target of a fixup must be a segment, a group or an EXTDEF

Again, I’m not sure what they mean. But there are lots of checks that can cause “E2045 Bad object file format: '%s'”.


Accepted by compiler, but no real checks are performed.

LINNUM (94H, 95H)

Accepted by compiler, but no real checks are performed.


Accepted by compiler, but no real checks are performed.


Ignored by compiler.

That’s the end of the list. Any other record type will cause the immediate error “E2045 Bad object file format: '%s'” :)

Useful links

My OMF file template for 010Editor: https://www.mediafire.com/?bkpbkjvgen7ubz1
OMF file format specification.
The Borland Developer’s Technical Guide
Objconv.exe by Agner Fog
Manual for objconv.exe

19 Jun

Weirdness of C# compiler

I’ve been quite busy lately. I made the OMF file template I promised a few weeks ago, found a remote code execution vulnerability in One Big Company’s product and spent some time breaking a keygenme by li0nsar3c00l. I’ll make a blog post about most of these findings sooner or later.

But today I want to show you something that made me go WTF..

I needed to see what a “while true do nothing” loop looks like in IL. Since the C# compiler and the .NET JIT compiler are quite smart and will optimize the condition check away when a loop is certainly eternal, I needed to get creative:
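Something along these lines (an illustrative sketch, not necessarily the exact code) – the condition comes in as a parameter, so neither compiler can prove the loop is eternal:

    // Illustrative sketch: the parameter value is not tracked by the compiler,
    // so the condition check survives into the IL even though we always pass true.
    static class Program
    {
        static void Spin(bool keepGoing)
        {
            while (keepGoing)
            {
                // do nothing
            }
        }

        static void Main()
        {
            Spin(true);
        }
    }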

Nothing fancy, but the C# and JIT compilers don’t track parameter values, so they both generate proper code..

Well, I thought it was proper code, until I looked at the generated MSIL:

WTF WTF WTF? Can anyone explain to me why there are 2 ceq instructions?

I could understand the extra nop and stloc/ldloc instructions, but that second ceq completely blows my mind.. And it’s the same for the .NET 2.0 and 4.5 compilers.