
Saturday, August 10, 2013

Daily Blog #48: Saturday Reading 8/10/13

Hello Reader,
            It's Saturday! Hooray! The week is over and FedEx pickup ends earlier today, meaning you either have extra time in the lab or some time at home. Either way, get some coffee and let's get our forensic reading going.

1. Joachim Metz has updated his volume shadow specification paper, not this week but recently enough that I didn't read it until this week. If you are at all curious about how the volume shadow service data structures are stored, then read this for what I believe to be the most detailed guide outside of whatever internal team at Microsoft developed it. In addition, if you care more about the use of volume shadow copies in your analysis and the existence of unallocated space in VSCs, you should read the paper he presented, which will answer questions you didn't even know you had.

2. Did you read yesterday's blog? No? Oh well we had another Forensic Lunch with David Nides, Kyle Maxwell, Joseph Shaw and the fine fellows I work with at G-C Partners. Tune in and keep up with what I think was a great hour of forensic discussion.

3. Andrea London has posted the slides for her DefCon talk, http://www.strozfriedberg.com/wp-content/uploads/2013/08/DefCon-2013.pdf, titled 'The Evidence Self Destructing Message Apps Leave Behind'. Her talk covers a wider range of these applications than I've seen covered before, and it's a good read as she and Kyle O'Meara go deep into the file system internals and network traffic exchanged.

4. Lenny Zeltser posted a nice retrospective on how teaching malware analysis has grown, http://blog.zeltser.com/post/57795714681/teaching-malware-analysis-and-the-expanding-corpus-of. It's a nice short read and reinforces the idea that his advice remains the same 10 years later:
  • Too many variables to research without assistance
  • Ask colleagues, search Web sites, mailing lists, virus databases
  • Share your findings via personal Web sites, incidents and malware mailing lists

5. If you are doing USB device forensics and have a Windows 8 system that Woanware's USB Device Forensics application does not support yet, then check out TZWorks' USB Storage Parser. So far it's the only tool I have that takes the multiple Windows 8 USB artifacts and combines them into a single report of activity.

6. Hal Pomeranz put out a new Command Line Kung Fu entry this week, http://blog.commandlinekungfu.com/2013/08/episode-169-move-me-maybe.html, always a good read.

7. On an earlier Forensic Lunch you may have heard Rob Fuller talk about anti-forensic custom hard drive firmware. Going deeper into that topic, here is a great article about hard drive hacking that shows how these firmware changes are researched, implemented and performed. If you are dealing with an advanced subject you might want to be aware of these new possibilities! http://spritesmods.com/?art=hddhack

8. In this week's Forensic Lunch we talked about parsing carved binary plists. For those of you looking to implement your own parsers or just trying to understand the format better, here are two sources: the OS X code for binary plists, http://opensource.apple.com/source/CF/CF-550/CFBinaryPList.c, and a great write up on plist forensics by CCL, http://www.cclgroupltd.com/images/property%20lists%20in%20digital%20forensics%20new.pdf.
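
If you just want to sanity check what an intact carved binary plist contains before writing a full parser, Python's standard plistlib module can read the binary format directly. This is only a minimal sketch and it assumes the carved bytes (the hypothetical carved.plist below) are a complete, undamaged plist; truncated fragments will still need the manual structure walking described in the references above.

# Minimal sketch: read a carved binary plist with Python's standard plistlib.
# Assumes the carve is complete, from the "bplist00" magic through the trailer;
# damaged or truncated carves will raise an exception and need manual parsing.
import plistlib

with open("carved.plist", "rb") as f:      # hypothetical carved file
    data = f.read()

if data.startswith(b"bplist00"):           # binary plist magic bytes
    print(plistlib.loads(data, fmt=plistlib.FMT_BINARY))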

That's all I have for this Saturday Reading. I hope these links are enough to get you through your day. Tomorrow is Sunday Funday and I have yet another challenge waiting for you to solve. This week we will have 'winner's choice', where the winner can pick either a free ticket to PFIC or a year license to AccessData's Triage tool!

Friday, August 9, 2013

Daily Blog #47: Forensic Lunch 8/9/13

Hello Reader,
Going to try something different today and see if I can embed our Forensic Lunch live stream in the blog!

Forensic Lunch is something we are trying to do every Friday, where we talk about updates to research from around the community as well as our challenges and successes here in the G-C Lab. If all goes well you can watch the show either live or recorded in the embedded YouTube video below!



Tomorrow is Saturday Reading and I have some good articles and papers to pass on and don't forget Sunday for our weekly forensic contest!

9th August 2013 | The Singapura Images


9th August 2013.
It is Singapore's 48th Birthday today.
I didn't manage to finish the originally planned 48 images for National Day, but here is a glimpse of what I have done and played around with for the past 2 months: First there was transportation in Singapore.




Then it was food, with the Gordon Ramsay thing...






The vintage travel posters:



Then there was the "Tin Tin" series:




It was SAF Day and there was the NS-related stuff.




Pok Pok & Away!

Daily Blog #46: Understanding the Artifacts: USBStor

Hello Reader,
               No time to finish my Gmail code review so I'm going to continue the understanding the artifacts posts to keep things going. I got some good responses yesterday from the prolific Joachim Metz regarding what he's seen in User Assist keys, which I have updated the post to include. The more we share our knowledge with each other, the better picture we have of what's true and what's possible, so if you see something you feel is missing please let me know and I'll incorporate it!

USBStor

Most of us doing forensics are familiar with the USBStor key; we look to it to identify USB devices plugged into a system and to identify the make, model (unless it's generic) and serial number (as Windows reports it) of the device. USBStor also has at least two sister keys, IDE (for physical disks) and SBP2stor (for FireWire), all of which serve the same purpose. This is one of the first registry artifacts many examiners are made aware of, as what USB external storage devices have been attached is so important to most investigations. Many times I'm asked, as I've stated in the prior post, 'Is the computer logging this to track us? Did the NSA request this feature?'. The answer is, as far as I know, no.

Instead, the USBStor key and its sisters are all part of a convenience mechanism for the user, one that is greatly appreciated: they associate a known device with its loaded driver. Without these keys, every time you inserted an external device (USB, eSATA or FireWire in this example) the system would have to work out which driver it needs, check whether that driver is present and then load it. Thanks to this caching of known device-to-driver pairs, the device comes up quickly on each subsequent plugin.

You might ask, why does it not stop keeping knowledge of devices after so many days? The answer is that it's more inefficient to check and expire registry keys, only to recreate them again if the device is plugged in later, than to simply keep storing them now that hard drive space is no longer at a premium.

This understanding can help you explain odd scenarios. For instance, let's say a generic USB device was plugged in (many white-labeled devices do not identify a specific manufacturer) and from its name you cannot determine what kind of device it was: storage, or connectivity of some kind (CD-ROM, phone, MP3 player that does not expose its file system). You can look at the driver that was loaded to determine what functionality Windows made available to the custodian and how the custodian could have made use of it on this system.
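
If you want to pull the device details and the driver they map to out of an exported SYSTEM hive yourself, here is a minimal sketch using the third-party python-registry module. The control set number and the exact values present (FriendlyName, Service) vary by Windows version, so treat the paths and value names below as assumptions to verify against your own hive.

# Minimal sketch: walk USBSTOR in an exported SYSTEM hive using python-registry
# (pip install python-registry). The control set and value names are assumptions
# to check against your evidence; "Service" names the driver bound to the device.
from Registry import Registry

reg = Registry.Registry("SYSTEM")                         # exported SYSTEM hive
usbstor = reg.open("ControlSet001\\Enum\\USBSTOR")

for device in usbstor.subkeys():                          # e.g. Disk&Ven_...&Prod_...&Rev_...
    for instance in device.subkeys():                     # serial number as Windows reports it
        values = {v.name(): v.value() for v in instance.values()}
        print(device.name(),
              instance.name(),
              values.get("FriendlyName", ""),
              values.get("Service", ""),                  # driver loaded for this device
              instance.timestamp())                       # key last-write time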

It's this kind of deeper understanding that will lead to better explanations, testimony and fact finding. I hope you look to understand deeper, and let me know in the comments below if you think there is functionality that I'm missing!

Thursday, August 8, 2013

Bistro@Changi | Planes, Memories and Makan


29th July 2013.
Pasir Ris, Simei and Tampines will bring back memories for those who have been to Pulau Tekong. But for a perm staff like me, Changi Village was an equally sentimental place where I used to go for nights out or food after booking out from camp.


One of my favourite places besides Changi Village Food Centre’s Ipoh Hor Fun was Bistro@Changi where I had company cohesions, birthdays, the once-in-a-while treat after a tough week, reunion and ORD during my time on Tekong.
It is a relatively exclusive little eatery located at Changi Beach Park where you could enjoy the cooling sea breeze (straits actually) and watch planes descending upon Changi International Airport. For a plane spotter, this is the place to be where you can enjoy food, drinks and do plane-watching.
If possible I would normally start off with a bowl of their signature Mushroom Soup (5.90 SGD), a well-balanced creamy mushroom soup which comes with a slice of garlic bread.

Changi Bistro - Mushroom Soup

My favourite item on the menu is their Flame Grilled Chicken Chop (15.90 SGD) formerly “Hickory Chicken Chop” but still the same. It was the good old tender boneless chicken chop marinated with a tasty hickory barbeque sauce.


Still on their menu after 4 years is the Sambal Fish, grilled dory topped with spicy (very spicy) sambal sauce and served with buttered rice.

Changi Bistro - Ultimate Nachos

However, my new favourite on their menu has got to be their New Zealand Lamb (19.90 SGD), a beautifully grilled “to perfection” leg of lamb served with a delicious peppery sauce.

Changi Bistro - New Zealand Lamb

If you just want something to chill out with, such as snacks or light bites, their Ultimate Nachos (8.00 SGD) is still on the menu. Crispy nachos served hot and topped with mozzarella cheese and jalapeno chilli with a salsa dip at the side, a perfect comfort food to spend the evening with some nice drinks from the bistro's bar.

Changi Bistro - Ultimate Nachos

Otherwise, you can go local with the Bistro’s Satay (11.90 SGD). It may be pricey for a satay at 1.90 per stick but these are some really good satay comparable with Chuan Kee’s at Old Airport Road.

Changi Bistro - Bistro's Satay

Currently they are figuring out a name for a special cocktail; I shall just call it the "Changi Calamansi" for now. It is a cocktail with lime and sour plum which is very refreshing and rather sweet. They have two versions: version 2 has a stronger punch of alcohol, while version 1, my preference, is more enjoyable for the relaxing atmosphere along Changi Beach.

Changi Bistro - "Changi Calamansi"

I would like to come back here again for their Tom Yum Mussels or Barramundi in Thai Sauce if I do visit Changi Village again or cycle there from East Coast. The place is unpretentious, the food is decent, the ambience is top-notch (if you don’t mind alfresco dining) and the place holds great sentimental value for me.

Changi Bistro - Woof!

Bistro@Changi
260 Nicoll Drive
Changi Beach Carpark 1
Singapore 498991

Operating Hours
Mon - Thurs             12pm - 11pm
Fri, Sat                   12pm - 1am
Sun                       10am - 11pm

How to Get There:
Bus Services: 89, 19, 9
Alight at Changi Beach CP 2 Bus Stop.

Special thanks to Brandon for the invitation back to this memorable place!

Moving the Little House a Little Means a Lot - Inman Park

The cottage is about 94 years old and it has a new story to tell. They moved it about 130 feet. It already has a Facebook page: At the Collective part of the new Krog Street Market. This was Tuesday morning, August 6, 2013.

The move was as much fun as an architecture tourist can have. In the process the movers, contractors, developers, and sidewalk superintendents felt an unexpected camaraderie. We were all smiles at the end and the good cheer has lasted me for 36 hours.

Why'd they move it? Why didn't they just tear it down?


It's the only house on that side of the block but I doubt many noticed.

Mapview-before-2013-08-06-Cottage-at-723-Lake-Ave-Moved-to-corner-of-Waddell-Street-for-Krog-Street-Market-3
Folks lived there up until about a year ago; Google street view still has a picture. Real estate sites say it's 1,080 square feet, built in 1920. The "1920" is probably wrong.

It's cute but it was in the way, almost a victim of the Atlanta BeltLine, which made the development of Krog Street Market possible. For you non-Atlantans: the BeltLine is a really big deal.

In most circumstances they'd have torn it down. Instead, it's a great preservation story.

IMG_2878
That's because it's in Inman Park and Inman Park is strong and strict. And it's because folks are more preservation minded these days. Aren't they?

IMG_2795  Cottage at 723 Lake Ave. to be moved to the corner of Waddell Street for Krog Street Market
It was in the way, smack in the middle of the space. The "Inman Park overlay Historic District" said they couldn't mess with it.

IMG_2794  Cottage at 723 Lake Ave. to be moved to the corner of Waddell Street for Krog Street Market
Then at a meeting of developers and city planners someone wondered, "Can we move it?"

IMG_3063-2013-08-05-Cottage-at-723-Lake-Ave-to-be-moved-to-corner-of-Waddell-Street-for-Krog-Street-Market
They laid foundations at the corner of Lake Street and Waddell.

 IMG_3087-2013-08-06-Cottage-at-723-Lake-Ave-Moved-to-corner-of-Waddell-Street-for-Krog-Street-Market
They hired Roy Bishop House Movers from Stockbridge and went to work.


This is what they had to do.


They fiddled and rocked and tweaked and brought it home.

IMG_3113-2013-08-06-Cottage-at-723-Lake-Ave-Moved-to-corner-of-Waddell-Street-for-Krog-Street-Market
This was "the man" John Kinard, owner of Roy Bishop House Movers. During the move he was calm, quiet and serious. When it was done, he smiled, chatted, iPhoned, and headed out to the next job.

He told me it is well built, that if they'd braced it wrong they'd have broken it in half. He said there was a hidden chimney and if they hadn't found it and braced it properly it might have been trouble.

IMG_3117-2013-08-06-Cottage-at-723-Lake-Ave-Moved-to-corner-of-Waddell-Street-for-Krog-Street-Market
The Paces Properties folks were certainly happy, another milestone on the way to opening Krog Street Market.

IMG_3132-2013-08-07-Cottage-at-723-Lake-Ave-moved-to-corner-of-Waddell-Street-for-Krog-Street-Market
Now for some TLC.

Mapview-before-2013-08-06-Cottage-at-723-Lake-Ave-Moved-to-corner-of-Waddell-Street-for-Krog-Street-Market-1
This is the corner of Lake and Waddell before, thanks to Google Street View.

IMG_3139-2013-08-07-Cottage-at-723-Lake-Ave-moved-to-corner-of-Waddell-Street-for-Krog-Street-Market
The cottage at its new home looking fine; it went from invisible to anchor.

I took way too many pictures and videos of the move.




Auto insurance and pizza delivery

We get a lot of calls from parents -- and usually those calls are after the fact, unfortunately -- about whether their child delivering pizzas needs additional auto coverage.

Sorry, but the answer's usually yes. Most personal auto insurance policies won't cover you if you're getting paid to use your own car to transport people or property for business purposes.

In general, you'll need to buy a business or commercial auto insurance policy if you are a health care worker who occasionally uses your own car to take clients to appointments. The same is true if you use your own car to deliver flowers, newspapers, pizzas, etc.

If you have questions about your coverage -- and policies do differ -- contact your agent or insurance company directly.

Lovebox Festival (& deodorant challenge)!

I'm not normally one for festivals - my phobia of people being sick stops me doing a lot of things, and going to festivals is one of them.  However, Sure recently got in touch with a challenge - to try out their new Maximum Protection deodorant in a situation where you are likely to get rather hot & sweaty.  I chose two tickets to Lovebox festival to see a couple of my favourite artists live, and it just so happened that some of my best friends went along too!

My favourite girls in the world! Zoe, Emma and Lily!

Jumping/Action shots - which definitely tested the deod!


Lily's vlog of our day!

The Festival: I saw two of my current favourite musical obsessions, Josef Salvat (thanks Andrew you babe for coming to watch him with me), and Aluna George.  We sipped rosé, ran around, ate pizza, jumped for photographs and claimed our spot of grass in between five massive flags.  After the festival I made my way straight out to the Barfly in Camden to see Andrew perform with his band Whisky Jax and did even more dancing there.  The day was pretty perfect and has definitely re-ignited my love of festivals.

Lily also took her camera along on the day (most of the photographs above are hers, thank you!) and we didn't realise that the camera was on a mode which filmed little clips before we took a photograph.  So once she was home she put together a little vlog of our day (you can see me preparing to take a selfie - cringe!); you can watch that above too!

The deodorant: Now I am already a massive fan of this deodorant and have been using it religiously for the past year, however they have brought out a new scent and so I was looking forward to giving it a try.  The product works best when applied before bed, as that's when your body temperature is at its most consistent so it can really get working.  The morning after applying it was the day of the festival.  I had a shower (and shaved - note this is important, as this surely means that I'm getting rid of the deod?).  I gave myself another sweep of the product before heading out at 12pm and I can safely say that I didn't sweat an inch, despite the hot weather and festivities.

I had an incredible (sweat-free) weekend and would do it all over again if I could!  Thanks Sure for the opportunity!

What's your favourite deodorant and festival?

xxx

Wednesday, August 7, 2013

Daily Blog #45: Understanding the artifacts: User Assist

Hello Reader,
              Turns out Gmail is very complicated, so I need more time to parse through the JavaScript and CSS to find the right code that is rendering the array of emails to viewable text. If you've already done this feel free to leave me a note in the comments below or via email at dcowen@g-cpartners.com. So, to buy myself some time, I am going to fill in with a blog series I plan to interject throughout the year called 'Understanding the artifacts'.

If you remember from the milestone series, I talked about the importance of understanding not only what an artifact means but why it's created. In these posts I will go into detail on what I understand the original intent of these data structures to be. If you understand why a developer created an artifact that you rely on, you can better predict not only what data should be stored in it but also what other artifacts may exist.

This post will focus on the 'User Assist' artifact. There are a lot of good posts that explain how to interpret the User Assist registry keys; http://windowsir.blogspot.com/2007/09/more-on-userassist-keys.html, http://www.4n6k.com/2013/05/userassist-forensics-timelines.html, http://sploited.blogspot.com/2012/12/sans-forensshic-artifact-6-userassist.html, http://forensicsfromthesausagefactory.blogspot.com/2010/05/prefetch-and-user-assist.html and http://forensicartifacts.com/2010/07/userassist/ are just a few examples of the wealth of information available on what it contains, how to parse it and how to interpret it. What most posts fail to address is why it is there at all.

Most times when someone first gets introduced to digital forensics their first thought is 'my computer is spying on me!'. This may seem to be true, but the facts are much simpler: the developers who created the operating system and applications you rely on want to give you the best experience possible. In trying to create a good experience they want to make it easy for you to access the documents and programs you use the most.

The User Assist key was created to fulfill one purpose: to populate the start menu list of recently executed programs so you can quickly load them again. This is why it tracks the last time of execution, the full path to the executable and the number of times the program has been executed, all so that when you click on the start button a dynamically sorted list can show the approximately 15 programs (excluding any the user has pinned) that the user executes most frequently.
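
If you want to see those entries for yourself from an exported NTUSER.DAT hive, here is a minimal sketch, again using the third-party python-registry module. The value names are ROT13 encoded; the binary layout of the run count and last-execution timestamp inside the value data differs between XP and Windows 7 and later, so this sketch only decodes the names.

# Minimal sketch: list User Assist entries from an exported NTUSER.DAT hive with
# python-registry (pip install python-registry). Value names are ROT13 encoded;
# the count/last-run structure in the value data varies by Windows version and is
# intentionally left unparsed here.
import codecs
from Registry import Registry

reg = Registry.Registry("NTUSER.DAT")     # exported user hive
userassist = reg.open("Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\UserAssist")

for guid_key in userassist.subkeys():     # one GUID subkey per purpose (programs, shortcuts, ...)
    count_key = guid_key.subkey("Count")
    for value in count_key.values():
        print(guid_key.name(), codecs.decode(value.name(), "rot_13"))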

In order to be more efficient, the developer decided not to limit the number of entries that could be stored in the User Assist key, because you don't want false statistics if a program drops off for a couple of months and then gets frequent usage again. For instance, if the user went on vacation, started playing games daily and stopped executing Microsoft Word, then when the user goes back to work the start menu would only display games and not his work tools had the developer limited the number of entries. Instead, every entry is stored and the list is sorted by number of executions and time of last execution.

This is also why there are two sets of registry keys for User Assist one for program execution and the other for shortcut execution as they are displayed at different points to the user.

Joachim Metz points out there can be more than two though:
" There can be more than 2. I've seen at least 3 different UserAssist subkeys on XP and Vista, and about 8 different ones on Win 8."
Each separate subkey should be divided by purpose; it will be interesting to see what they are for Windows 8.

So what can we learn from this?

1. We can debunk the idea that something is 'spying' on the user
2. We can explain in clear terms to a judge and jury why an artifact is created
3. We can explain that these artifacts exist by default and have to exist unless disabled, along with what functionality disabling them would remove
4. We can predict what data should be contained within it

I'll see if I can get my code review done this evening and continue the Web 2.0 forensics series tomorrow.


Health insurance questions: Preventive colonoscopies and polyps

Until fairly recently, when consumers had routine preventive colonoscopies, they often faced a substantial bill for surgery if a polyp was discovered and removed during the procedure. But current guidelines from the U.S. Department of Labor, under the Affordable Care Act, protect consumers from these extra charges for polyp removal.
Q5: If a colonoscopy is scheduled and performed as a screening procedure pursuant to the USPSTF recommendation, is it permissible for a plan or issuer to impose cost-sharing for the cost of a polyp removal during the colonoscopy? 
No. Based on clinical practice and comments received from the American College of Gastroenterology, American Gastroenterological Association, American Society of Gastrointestinal Endoscopy, and the Society for Gastroenterology Nurses and Associates, polyp removal is an integral part of a colonoscopy. Accordingly, the plan or issuer may not impose cost-sharing with respect to a polyp removal during a colonoscopy performed as a screening procedure. On the other hand, a plan or issuer may impose cost-sharing for a treatment that is not a recommended preventive service, even if the treatment results from a recommended preventive service.
In addition, the federal guidelines help people with a family history that puts them in a high-risk group for certain diseases. They will now be able to get more frequent preventive care without additional costs.
Q7: Some USPSTF recommendations apply to certain populations identified as high-risk. Some individuals, for example, are at increased risk for certain diseases because they have a family or personal history of the disease. It is not clear, however, how a plan or issuer would identify individuals who belong to a high-risk population. How can a plan or issuer determine when a service should or should not be covered without cost-sharing? 
Identification of "high-risk" individuals is determined by clinical expertise. Decisions regarding whether an individual is part of a high-risk population, and should therefore receive a specific preventive item or service identified for those at high-risk, should be made by the attending provider. Therefore, if the attending provider determines that a patient belongs to a high-risk population and a USPSTF recommendation applies to that high-risk population, that service is required to be covered in accordance with the requirements of the interim final regulations (that is, without cost-sharing, subject to reasonable medical management).
If you're having problems with your health insurer over these sorts of issues and you live in Washington state, feel free to contact our consumer hotline at 1-800-562-6900 or email us

Daily Blog #44: Forensic Tips - Shadow Access

Hello Reader,
              I'm going to take a break today from the web 2.0 series for two reasons: 1. I'm not ready to write up the next post until I've reviewed the rest of the JavaScript that is parsing the message headers and contents we talked about last week. 2. A method I've been using for shadow access apparently isn't well understood, and if it saves time in my lab it will save time in yours. Also, as a reminder, we are doing another Forensic Lunch this Friday 8/9/13 where we talk about new updates in our research and answer forensic questions from you guys.

To get notified when the Youtube viewing link becomes available click here: https://plus.google.com/u/0/events/c9gklmj2cjhfdou01fjlhskcgkk

If you want to talk about your research on the Forensic Lunch give me an email and I'll invite you to the video chat room, dcowen@g-cpartners.com

Accessing shadow copies in Windows from SIFT:

Now if you have been following Joachim Metz's updates to libvshadow you will have seen that there is now a native version for Windows. There are some steps you have to take to get this to compile, which you can find here:
https://code.google.com/p/libvshadow/wiki/Building

You need to build it in Windows using Cygwin or Visual Studio and get a third-party package called Dokan, located here: http://dokan-dev.net/en/

Now this takes a bit of time, some experience with compiling code and, if you go the Visual Studio route, knowledge of Visual Studio. Joachim has given a great tutorial but I've still met people who have had issues with it. So if you want access to all the system files we talked about that are stored in the shadow volumes and that aren't available to you using the vssadmin/API routes, such as the $MFT, $LogFile, $UsnJrnl and more, then I'll give you an easy workaround.

Step 1. Download SIFT http://computer-forensics.sans.org/community/downloads
Step 2. If you don't already have vmware workstation/vmware player then download it from www.vmware.com
Step 3. If your image is a multipart E01, AFF, etc., then mount your image using ewfmount/affuse first to make it appear as a single raw image
Step 4. Use vshadowmount to mount the single raw image, whether it is a true raw image or one exposed virtually by the previous step, and this is where the key step is. When you do this, pass in an extra option, -X allow_other, as seen below (with image.raw standing in for your raw image):
vshadowmount -X allow_other image.raw /mnt/
Step 5. Point FTK Imager to an image file located on \\siftworkstation\ and add each volume shadow you want to extract data from.

You can see Joachim's mounting instruction page here which references this fact:
https://code.google.com/p/libvshadow/wiki/Mounting

What it does not clearly spell out is that if the user_allow_other option is not enabled in /etc/fuse.conf, you will not be able to grant non-root users access to the mounted directory. Allowing non-root users is necessary for what I'm using SIFT/libvshadow for, which is exposing the mounted shadows to Windows; without it, CIFS cannot expose the mounted shadow copies to other networked machines. This network share access to volume shadow copies mounted in Linux is what I do to speed things along on machines where I don't have the native Windows libvshadow compiled, or where Dokan fails to compile.

I mount with vshadowmount -X allow_other, then point FTK Imager at the \\siftworkstation network shares that SIFT exports by default and access the shadow copies as raw images in FTK Imager to export out the system files not exposed by the native Linux NTFS driver.
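
If you would rather script against the shadow stores than mount them at all, libvshadow also ships Python bindings (pyvshadow). Here is a minimal sketch that enumerates the stores in a raw volume image; it assumes image.raw is a raw image of the NTFS volume itself, so for a full disk image you would need to carve out or offset to the partition first.

# Minimal sketch: enumerate volume shadow stores with pyvshadow, the Python
# bindings that ship with libvshadow. Assumes image.raw is a raw image of the
# NTFS volume (not the whole disk).
import pyvshadow

volume = pyvshadow.volume()
volume.open("image.raw")                      # hypothetical raw volume image

print("shadow copies found:", volume.number_of_stores)
for index in range(volume.number_of_stores):
    store = volume.get_store(index)
    boot = store.read_buffer(512)             # each store reads like a raw NTFS volume
    print("store", index, boot[3:11])         # should print the NTFS OEM ID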

Hopefully this is helpful and in the near future all our tools will adapt enough where we don't have to do this, but until then this works 100% of the time for me when all else fails.



Tuesday, August 6, 2013

Daily Blog #43: Sunday Funday Winner 8/5/13

Hello Reader,
      Another Sunday Funday is behind us and some more great answers were given; thanks to everyone who submitted on Google+ and anonymously! I've learned from this week's challenge that I need to be a bit more specific to encourage more focused answers, and I'll make sure to do that for next week's challenge. This week Eric Zimmerman turned in a great answer, sharing the win with Jake Williams.

Here was the challenge:
The Challenge: Since we are giving away a copy of Triage, let's have a question related to manually triaging a system.
For a Windows XP system:
You have arrived onsite at a third-party company that is producing a product for your company. It is believed that one of the company's employees has exfiltrated the database of your customers' information, which you provided for mailing and processing, sometime in the last 30 days. While the third-party company is cooperating with the investigation, they will not allow you to image every system and take the images back to your lab. However, they will allow you to extract forensic artifacts to determine if there is evidence of exfiltration present and will then allow a forensic image to be created and taken offsite.
With only forensic artifacts available and a 32GB thumb drive, what artifacts would you target to gather the information you would need to prove exfiltration?

Here is Eric Zimmerman's winning answer:
Since this is a triage question, the goals are to get as much info in as short a time frame as possible. the idea is to cast as wide a net into a computers data as possible and intelligently look at that data for indicators of badness.
i am not going to include every key, subkey, querying lastwrite times/value and how to decode things from the registry or otherwise mundane details. these steps should be automated as much as possible for consistency and efficiency anyways.
the first thing i would do is interview management at the company to find out what kind of usage policies they have: are employees allowed to install whatever software they want? any access controls? who has rights to where? What kind of database was my customers stored in? who has rights to that database? and so on
i would also ask management who their competitors are and then locate their web sites, domain names, etc.
once i had the basic info i would assemble a list of relevant keywords (competitor names, relevant file extensions, etc). i would also look specifically for tools that can be used to connect to the database server and interact with it. this of course changes depending on which database it is (mysql i may look for putty or other terminal programs, oracle = the oracle client, sql server = that client, LinqPad, etc.)
with that basic info in hand i would triage each computer follows:
1. collect basic system information such as when windows was installed, last booted etc.
2. check running processes for things like cloud storage (dropbox, skydrive, teamviewer, other remote access tools)
3. look for any out of the ordinary file shares on the computer that can be used to access the computer from elsewhere on the network
4. check MRU keys for network shares, both mapped and accessed via command line
5. dump DNS cache and compare against keyword lists
6. dump open ports and compare against a list of processes of interest.
are any remote access tools running? file sharing?
7. Look to see what data, if any, is present on the clipboard. are there any suspicious email addresses or the text of an email or other document? what about a file or a list of files?
8. unpack all prefetch files and see what applications have been executed recently (certainly within the last 30 days, but expand as necessary). again we key in on processes of interest, etc
9. look at all the installed applications on a computer and specifically those installed within the last 30 days
10. dump a list of every USB device ever connected to the machine including make, model and serial #. also reference, when available, the  last inserted date of the device. cross reference this list with any issued thumb drives the company provided from interviews. make a note of any drive letters devices were last mounted to. also process and cross reference setupapi.log for devices connected within the last 30 days.
11. dump web browser history for IE, FireFox, Chrome, and Safari and look for keywords, competitor URLs, etc. hone in on last 30 days, but look for keywords thru entire history in case things were initiated previous to the data being exfil'ed. look for hits against cloud storage, VNC, and similar.
12. dump web browser search history including google, yahoo, youtube, twitter, social networks, etc and again filter by last 30 days with keyword hits across all date ranges. Also look for references to file activity such as file:///D:/somePath, etc.
13. dump passwords for browsers (all of them), mail clients, remote access tools, network passwords (RDP, etc). are any webmail addresses saved by the browsers?
14. dump keys from registry including CIDSizeMRU, FirstFolder, LastVisitedMIDMRU, LastVisitedMIDMRULegacy, MUICache, OpenSavePidlMRU, RDP sessions, RecentDocs, TypedPaths, TypedURLs, UserAssist, appcompatcache and of course ShellBags. all of these keys should be checked for keyword hits as before. specifically, look for any USB
15. Look for instant messaging programs and chat history for skype to include who they are talking to, if any files were xfered, and so on.
16. look for any p2p programs that could have been used to xfil data.
17. search the file systems for such things as archives, shortcut files (lnk), evidence eliminator type programs, drive and file wiping programs, etc. cross reference any lnk files with paths used by USB devices and shellbags to get an idea of what kinds of files were kept on any externally connected devices. look inside any archives found (zip, rar, tar, 7zip, etc) for any keywords of interest (like a text file containing my customers). filter based on MAC dates for files and of course look for keyword hits.
18. look at event logs for relevant entries (what is relevant would be determined by how the computers are configured. what kind of auditing is enabled by the network admins, etc). things like remote access and logins, program execution, etc would be key here.
19. time permitting, and based upon the results from above, use a specialized tool to unpack restore points and look for files as outlined above (lnk files, programs installed, etc)
20. look in the recycle bin for files (hey, ive worked plenty of cases where the incriminating evidence was in there!)
21. dump ram and run a quick "strings" against the binary, then look for keywords. going crazy with volatility is beyond triage, so this will suffice.
depending on where the database lives i would triage that system in the same way (if windows based) but if its mysql on linux or something i would review bash history files, sign ins, FTP logs, etc for signs of data being ex-filed. i would look at the database log files for logins and, if available, sql statements executed, errors, etc from the last 30 days.
finally i would ask about and review any web proxy logs or other logging systems the company has to look for suspicious activity.
all of this data would be automatically added to a timeline that could then be used to further narrow in on interesting periods of activity on each system.
with all the data collected i would want to start looking for default export names or extensions, keyword hits, and whatnot. the machines that have more indicators would go up on my list of machines to want to image. machines with little to no indicators would be removed from consideration.
ShellBags are going to be a key artifact in this case because they contain sooo much good data on Win XP. what other files were on any external devices connected to the systems? do i see the presence of "hacking" tools, ftp clients, putty, etc? are there folders or files indicative of my data or any of my competitors?
32GB is more than enough space to triage all the computers found at the business as there isnt a ton of need to copy files off the computer.
now all those steps are a heck of a lot to do manually (and several of them would be near impossible to do by hand), so in my case i would just run osTriage on each computer and it would pull all that info (and more) in a few seconds. add a bit of time to review the results and i would know which machines i wanted to image for a more thorough review.
with that info in hand i would most likely already know who exfi'led the data, but i would still request an image be made of each machine where suspicious activity was found.
(all of those steps could be further unpacked, but since this is a triage based funday question my response is kept in true triage style, fast and just enough of a deep dive to hone in on computers of interest).

However, Special Agent Zimmerman cannot accept the prize. So Jake Williams' hard work in his winning answer, seen below, wins the prize of a year license of AccessData Triage:
 
What artifacts would you look for across multiple Windows XP machine with only a 32GB USB drive to hold them all?
So we think that an evil user exfiltrated a database we provided to the business partner.  Because of the verbiage, we’re working under the assumption here that they were provided with an actual database file (.mdb).
Great. That probably wasn’t bright. In the future, we should NOT provide the business partner the database file and rather provide secure and AUDITABLE access to the data.  This seems like a good idea. There are other issues here, such as revocation of access and even keeping the current data picture (including opt outs for example) that further reinforce why this is better than a file. So we should definitely provide auditable access to the DB in the future, not a database file.
For this writeup, I’ll focus on evidence of execution, evidence of access, and then touch on potential evidence of exfiltration.  Here’s why: under the best of circumstances, we can have a hard time finding evidence of exfiltration. But these aren’t the best of circumstances. 
1. We have no information about how the partner may have exfiltrated the data.  
2. We have limited space in which to collect our data for further probable cause.
We’re really looking for suspicious activity on the machines that will open the door to full images for a complete investigation.  For that reason, we have to keep the scope small and limit it to that which will cover the most ground.
Evidence of execution:
So the first thing I want is access to prefetch files on all the machines.  This is my first stop.  If the user exfiltrated the database AND we have a DLP solution in place, they may need to encrypt the file first. I’d want to look for rar.exe, winzip.exe, or 7z.exe to look for evidence of execution of those utilities. Also, we’re looking for evidence of execution of any anti-forensics tools (commonly used when users are doing illegal stuff).  As a side note here, I’ve performed forensic investigations where I’ve found stuff like wce.exe or other “hacking tools” in prefetch.  In at least one particular case, this discovery was not part of the investigation specifically.  However, the fact that we highlighted it bought us a lot of good will with the client (since this was an indicator of a compromise or an AUP violation).
We’d want to know if the users used any cloud services that aren’t explicitly allowed by policy. For example, Dropbox, SkyDrive, GoogleDrive, etc. would be interesting finds.  While use of these services doesn’t necessarily imply evil, they can be used to exfil files.  Evidence of execution for any of these services would provide probable cause to get the logs from the devices.  For those who don’t know, this is a real passion of mine.  I did a talk at the SANS DFIR Summit looking at detecting data exfiltration in cloud file sharing services and the bottom line is that it isn’t easy. Because of the complexity, I expect criminals to use it more.  Those logs can contain a lot of information, but grabbing all logs in all possible user application directories might be too broad (especially given the 32GB USB drive limitation).  We’ll just start small with Prefetch.
I’d also want to get uninstall registry keys (HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall). My thoughts here are that 32GB is so little data for an enterprise that I’d be looking for evidence of programs installed that may have been used to read the data from the database or exfiltrate the data.  Again, this is so little data that we can store it easily.
UserAssist registry keys from all users would also be on my shopping list.  If the company uses a domain (and honestly what business doesn’t) this will be easier if roaming profiles are enabled.  We want to pull from these two keys for windows XP:
▪ HKEY_USERS\{SID}\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{GUID}\Count\
▪ HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{GUID}\Count\
Where GUIDs are usually {75048700-EF1F-11D0-9888-006097DEACF9} or {5E6AB780-7743-11CF-A12B-00AA004AE837}
 Again, I’m focusing on evidence of execution because space is tight. These entries won’t cover everything that was executed, generally it only includes items opened via Explorer.exe (double click).  Also, the entries are ROT13 encoded, but that’s easily overcome. Because it is possible that users deleted data, we might also want to grab UserAssist from NTUSER.DAT files in restore points.  This might be pushing the limit of my storage depending on how many machines our target has to triage (and how many Restore Points they each have).
Evidence of Access:
In this category, I’d be looking at MRU keys for Access.  Now these change with the version of MS Office, but a good starting point is to look in these subkeys in the user’s profile (where X.X is the version):
• Software\Microsoft\Office\X.X\Common\Open Find\Microsoft Access\Settings\Open\File Name MRU
• Software\Microsoft\Office\X.X\Common\Open Find\Microsoft Access\Settings\File New Database\File Name MRU
• Software\Microsoft\Office\X.X\Access\Settings
Locating our filename doesn’t prove anything, presumably we gave it to them to open, but it gives us a start.
If we know that the file was placed on a network share with auditing enabled, we want to identify who had access to that share using the records in the Security event log.  If auditing wasn’t enabled, we may still be able to find evidence of failed logon attempts to the share in the event logs on the file server.  Successful connections to the share may be found in the MountPoints2 (Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2) key so we want to grab that from users’ profiles.  Of course, it goes without saying that just because someone mapped a share doesn’t mean they even read our file (let alone exfiltrated it).
Event logs:
Depending on the event logs available, we may be able to tell if a user has accessed the database via an ODBC connector.  Usually users just open an Access file, but they could add it as an ODBC data source.  I don’t have my systems available here at DEFCON to do testing, but if the file was added as an ODBC source, there should be some remnants left over to locate.  These will often show up in event logs. We want to check event logs for our database file name.
Possible Evidence of Exfiltration:
Firewall logs are another item I’d collect.  Yes, I know some people will laugh at me here, but we are looking for data exfiltration and that may have happened over the network.  If we have some idea of where the data was exfiltrated to, firewall logs, if enabled, are a useful source of information.  Fortunately for our case with only a 32GB USB drive for the whole network, the logs are capped at 4MB by default.  This allows us to collect a lot of them without taking up lots of space.  We could get logs from 100 machines and only consume 4GB of our space.
Setupapi.log is another file I’d like to collect.  This log shows first insertion time for USB devices (a common exfiltration point).  While this log can’t tell us if a file was copied to a USB, analyzing setupapi.log files over an enterprise can show patterns of USB use (or misuse).  Correlating that with information with their security policy may yield some suspicious behavior that may be probable cause for further forensic images.
If there are other logs (from an endpoint protection suite) that log connections, I’d want to see if I could pull those as well.  While we’re at it, we’d want to filter event logs (particularly application event logs) for detection notices from the AV software.  What we are looking for here is to determine if any of the machines in scope have had infections since we turned over our database file.  We can filter by the log provider and we probably want to eliminate startup, shutdown, and update messages for the AV software.
If I had more space, I’d grab index.dat files from profile directories.  Depending on the number of systems and profiles, we’d probably run out of space pretty quickly though.  What we’re looking for here are applications that may use WinInet APIs and inadvertently cache information in index.dat files.  This happens sometimes in malware and certainly data exfiltration applications might also fit the bill.  However, my spidey-sense tells me that these index.dat files alone from many profiles/machines could exhaust my 32GB of space.
Parting thoughts:
Forensics where we rely on minimal information is a pain.  You have to adapt your techniques and triage large numbers of machines while collecting minimal data (32GB in this case).  I’d like to do more disk forensics and build timelines. I might even use the NTFS triforce tool.  If this were a single machine we were performing triage on, then my answer would certainly involve pulling the $USNJrnl, $LogFile, and $MFT files to start building timelines. The SYSTEM, SOFTWARE, and NTUSER.DAT hives on the machine would also be on my short shopping list.  However, over the multiple machines I believe the scenario covers, this just isn’t feasible in the space we’ve been given.

I'll follow up this contest with how I approached this case in real life in a later blog post. I will say that in my case the first thing I did was triage which systems showed access to the database itself to create a pool of possible exfiltrators. Then I went back and started pulling the data discussed in our two winning answers! From there I was able to discover enough suspicious activity and patterns of access to the underlying data through the User Assist keys, shellbags and lnk files to get approval to create a forensic image.

Tomorrow we continue the web 2.0 forensics series as I look to see when I should stop and move on and then come back to it later with other services besides Gmail.