
Thursday, August 8, 2013

Auto insurance and pizza delivery

We get a lot of calls from parents -- and usually those calls are after the fact, unfortunately -- about whether their child delivering pizzas needs additional auto coverage.

Sorry, but the answer's usually yes. Most personal auto insurance policies won't cover you if you're getting paid to use your own car to transport people or property for business purposes.

In general, you'll need to buy a business or commercial auto insurance policy if you are a health care worker who occasionally uses your own car to take clients to appointments. The same is true if you use your own car to deliver flowers, newspapers, pizzas, etc.

If you have questions about your coverage -- and policies do differ -- contact your agent or insurance company directly.

Lovebox Festival (& deodorant challenge)!

I'm not normally one for festivals: my phobia of people being sick stops me doing a lot of things, and going to festivals is one of them.  However, Sure recently got in touch with a challenge: to try out their new Maximum Protection deodorant in a situation where you are likely to get rather hot & sweaty.  I chose two tickets to Lovebox festival to see a couple of my favourite artists live, and it just so happened that some of my best friends went along too!

My favourite girls in the world! Zoe, Emma and Lily!

Jumping/Action shots, which definitely tested the deod!


Lily's vlog of our day!

The Festival: I saw two of my current favourite musical obsessions, Josef Salvat (thanks Andrew, you babe, for coming to watch him with me) and Aluna George.  We sipped rosé, ran around, ate pizza, jumped for photographs and claimed our spot of grass in between five massive flags.  After the festival I made my way straight out to the Barfly in Camden to see Andrew perform with his band Whisky Jax and did even more dancing there.  The day was pretty perfect and has definitely re-ignited my love of festivals.

Lily also took her camera along on the day (most of the photographs above are hers, thank you!) and we didn't realise the camera was on a mode which filmed little clips before we took a photograph.  So once she was home she put together a little vlog of our day (you can see me preparing to take a selfie, cringe!), which you can watch above too!

The deodorant: Now, I am already a massive fan of this deodorant and have been using it religiously for the past year; however, they have brought out a new scent, so I was looking forward to giving it a try.  The product works best when applied before bed, as that's when your body temperature is at its most consistent so it can really get working.  The morning after applying it was the day of the festival.  I had a shower (and shaved; note this is important, as surely that means I'm getting rid of the deod?).  I gave myself another sweep of the product before heading out at 12pm and I can safely say that I didn't sweat at all, despite the hot weather and festivities.

I had an incredible (sweat-free) weekend and would do it all over again if I could!  Thanks Sure for the opportunity!

What's your favourite deodorant and festival?

xxx

Wednesday, August 7, 2013

Daily Blog #45: Understanding the artifacts: User Assist

Hello Reader,
              Turns out Gmail is very complicated, so I need more time to parse through the JavaScript and CSS to find the right code that renders the array of emails into viewable text. If you've already done this, feel free to leave me a note in the comments below or via email at dcowen@g-cpartners.com. So, to buy myself some time, I am going to fill in with a blog series I plan to interject throughout the year called 'Understanding the artifacts'.

If you remember, in the milestone series I talked about the importance of understanding not only what an artifact means but why it's created. In these posts I will go into detail on what I understand the original intent of these data structures to be. If you understand why a developer created an artifact that you rely on, you can better predict not only what data should be stored in it but also what other artifacts may exist.

This post will focus on the 'User Assist' artifact. There are a lot of good posts that explain how to interpret the User Assist registry keys, such as http://windowsir.blogspot.com/2007/09/more-on-userassist-keys.html. http://www.4n6k.com/2013/05/userassist-forensics-timelines.html, http://sploited.blogspot.com/2012/12/sans-forensshic-artifact-6-userassist.html, http://forensicsfromthesausagefactory.blogspot.com/2010/05/prefetch-and-user-assist.html and http://forensicartifacts.com/2010/07/userassist/ are just a few examples of the wealth of information available on what it contains, how to parse it and how to interpret it. What most posts fail to address is why it's there at all.

Most times when someone first gets introduced to digital forensics, their first thought is 'my computer is spying on me!'. This may seem to be true, but the facts are much simpler: the developers who created the operating system and applications you rely on want to give you the best experience possible. In trying to create a good experience, they want to make it easy for you to access the documents and programs you use the most.

The User Assist key was created to fulfill one purpose: to populate the start menu list of recently executed programs so you can quickly load them again. This is why it tracks the last time of execution, the full path to the executable and the number of times the program has been executed, all so that when you click on the start button a dynamically sorted list can show the roughly 15 programs (excluding any the user has pinned) that the user executes most frequently.

To be more efficient, the developer decided not to limit the number of entries that can be stored in the User Assist key, as you don't want false statistics if a program drops off for a couple of months and then gets frequent usage again. For instance, if the user went on vacation, started playing games daily and stopped executing Microsoft Word, then when the user went back to work the start menu would only display games and not their work tools if the developer had limited the number of entries rather than storing all of them and sorting by number of executions and time of last execution.

This is also why there are two sets of registry keys for User Assist, one for program execution and the other for shortcut execution, as they are displayed at different points to the user.

Joachim Metz points out there can be more than two though:
" There can be more than 2. I've seen at least 3 different UserAssist subkeys on XP and Vista, and about 8 different ones on Win 8."
Each separate subkey should be divided by purpose; it will be interesting to see what they are for Windows 8.

So what can we learn from this?

1. We can debunk the idea that something is 'spying' on the user
2. We can explain in clear terms to a judge and jury why an artifact is created
3. We can explain that these artifacts exist by default and have to exist unless disabled, along with what functionality disabling them removes
4. We can predict what data should be contained within it (see the sketch below)
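
As a concrete illustration of point 4, here is a minimal sketch of pulling those predicted fields (path, run count, last execution time) out of an offline NTUSER.DAT hive. It assumes the python-registry library and the XP-era 16-byte value layout (4-byte session, 4-byte run count, 8-byte FILETIME); later versions of Windows use different GUID subkeys and a larger value structure, so treat it as illustrative rather than definitive.

# Minimal sketch: decode UserAssist values from an offline NTUSER.DAT hive.
# Assumes the python-registry library and the XP-era 16-byte value layout;
# Windows 7+ stores a larger (72-byte) structure, so adjust accordingly.
import codecs
import struct
from datetime import datetime, timedelta

from Registry import Registry  # pip install python-registry

USERASSIST = r"Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist"

def filetime_to_dt(ft):
    """Convert a 64-bit Windows FILETIME (100ns ticks since 1601) to a UTC datetime."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft / 10)

reg = Registry.Registry("NTUSER.DAT")
for guid_key in reg.open(USERASSIST).subkeys():
    try:
        count_key = guid_key.subkey("Count")
    except Registry.RegistryKeyNotFoundException:
        continue
    for value in count_key.values():
        name = codecs.decode(value.name(), "rot_13")    # value names are ROT13 encoded
        data = value.value()
        if len(data) >= 16:
            session, run_count, last_run = struct.unpack_from("<IIQ", data)
            # Note: on XP the stored counter is widely reported to start at 5.
            print(name, run_count, filetime_to_dt(last_run) if last_run else "never run")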

I'll see if I can get my code review done this evening and continue the Web 2.0 forensics series tomorrow.


Health insurance questions: Preventive colonoscopies and polyps

Until fairly recently, when consumers had routine preventive colonoscopies, they often faced a substantial bill for surgery if a polyp was discovered and removed during the procedure. But current guidelines from the U.S. Department of Labor, under the Affordable Care Act, protect consumers from these extra charges for polyp removal.
Q5: If a colonoscopy is scheduled and performed as a screening procedure pursuant to the USPSTF recommendation, is it permissible for a plan or issuer to impose cost-sharing for the cost of a polyp removal during the colonoscopy? 
No. Based on clinical practice and comments received from the American College of Gastroenterology, American Gastroenterological Association, American Society of Gastrointestinal Endoscopy, and the Society for Gastroenterology Nurses and Associates, polyp removal is an integral part of a colonoscopy. Accordingly, the plan or issuer may not impose cost-sharing with respect to a polyp removal during a colonoscopy performed as a screening procedure. On the other hand, a plan or issuer may impose cost-sharing for a treatment that is not a recommended preventive service, even if the treatment results from a recommended preventive service.
In addition, the federal guidelines help people with a family history that puts them in a high-risk group for certain diseases. They will now be able to get more frequent preventive care without additional costs.
Q7: Some USPSTF recommendations apply to certain populations identified as high-risk. Some individuals, for example, are at increased risk for certain diseases because they have a family or personal history of the disease. It is not clear, however, how a plan or issuer would identify individuals who belong to a high-risk population. How can a plan or issuer determine when a service should or should not be covered without cost-sharing? 
Identification of "high-risk" individuals is determined by clinical expertise. Decisions regarding whether an individual is part of a high-risk population, and should therefore receive a specific preventive item or service identified for those at high-risk, should be made by the attending provider. Therefore, if the attending provider determines that a patient belongs to a high-risk population and a USPSTF recommendation applies to that high-risk population, that service is required to be covered in accordance with the requirements of the interim final regulations (that is, without cost-sharing, subject to reasonable medical management).
If you're having problems with your health insurer over these sorts of issues and you live in Washington state, feel free to contact our consumer hotline at 1-800-562-6900 or email us

Daily Blog #44: Forensic Tips - Shadow Access

Hello Reader,
              I'm going to take a break today from the web 2.0 series for two reasons. 1. I'm not ready to write up the next post until I've reviewed the rest of the JavaScript that parses the message headers and contents we talked about last week. 2. A method I've been using for shadow access apparently isn't well understood, and if it saves time in my lab it will save time in yours. Also, as a reminder, we are doing another Forensic Lunch this Friday, 8/9/13, where we'll talk about new updates in our research and answer forensic questions from you.

To get notified when the Youtube viewing link becomes available click here: https://plus.google.com/u/0/events/c9gklmj2cjhfdou01fjlhskcgkk

If you want to talk about your research on the Forensic Lunch give me an email and I'll invite you to the video chat room, dcowen@g-cpartners.com

Accessing shadow copies in Windows from SIFT:

Now, if you have been following Joachim Metz's updates to libvshadow, you will have seen there is now a native version for Windows. There are some steps you have to take to get it to compile, which you can find here:
https://code.google.com/p/libvshadow/wiki/Building

You need to build it in Windows using Cygwin or Visual Studio and get a third-party package called Dokan, located here: http://dokan-dev.net/en/

Now, this takes a bit of time, some experience with compiling code and, if you go the Visual Studio route, knowledge of Visual Studio. Joachim has written a great tutorial, but I've still met people who have had issues with it. So if you want access to all the system files we talked about that are stored in the shadow volumes but aren't available to you via vssadmin/API routes, such as the $MFT, $LogFile, $UsnJrnl and more, then I'll give you an easy workaround.

Step 1. Download SIFT http://computer-forensics.sans.org/community/downloads
Step 2. If you don't already have vmware workstation/vmware player then download it from www.vmware.com
Step 3. If your image is a multi-part E01, AFF, etc., then mount your image using ewfmount/affuse first to make it appear as a single raw image
Step 4. Use vshadowmount to mount the single raw image, whether whole or virtual, and this is where the key step is. When you do this, pass in an extra option, -X allow_other, as seen below:
vshadowmount -X allow_other  /mnt/
Step 5. Point FTK Imager to an image file located on \\siftworkstation\ and add each volume shadow you want to extract data from.

You can see Joachim's mounting instruction page here which references this fact:
https://code.google.com/p/libvshadow/wiki/Mounting

but what it does not clearly spell out is that if you don't enable that option in fuse.conf (user_allow_other is commented out by default), you will not be able to allow non-root users access to the mounted directory. Allowing non-root users matters here because what I'm using SIFT/libvshadow for is exposing the mounted shadows to Windows: without it, CIFS can't expose the mounted shadow copies to other networked machines. This network share access to mounted volume shadow copies in Linux is what I do to speed things along on machines I don't have the native Windows libvshadow compiled on, or where Dokan fails to compile.

I mount with vshadowmount -X allow_other, then I point FTK Imager to the \\siftworkstation network shares that SIFT exports by default and access the shadow copies as raw images in FTK Imager to export out the system files not exposed by the native Linux NTFS driver.
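
If you prefer to script the Linux side of this, here's a minimal sketch of the same sequence run as root on SIFT. It assumes ewfmount (libewf) and vshadowmount (libvshadow) are on the PATH, and every path in it is a placeholder for your own case, so treat it as a starting point rather than a finished tool.

#!/usr/bin/env python3
# Minimal sketch of the mount sequence described above (run as root on SIFT).
# ewfmount/vshadowmount must be installed; all paths here are placeholders.
import os
import subprocess

E01_IMAGE = "/cases/disk.E01"   # first segment of the E01 set
EWF_MOUNT = "/mnt/ewf"          # ewfmount exposes the raw image as .../ewf1
VSS_MOUNT = "/mnt/vss"          # vshadowmount exposes one vss{N} file per shadow copy

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

os.makedirs(EWF_MOUNT, exist_ok=True)
os.makedirs(VSS_MOUNT, exist_ok=True)

# Step 3: expose the multi-part E01 as a single raw image.
run(["ewfmount", E01_IMAGE, EWF_MOUNT])

# Step 4: mount the shadow copies, allowing non-root (and therefore Samba) access.
# For a whole-disk image you may also need -o with the partition byte offset.
run(["vshadowmount", "-X", "allow_other", f"{EWF_MOUNT}/ewf1", VSS_MOUNT])

# Step 5 happens on the Windows side: point FTK Imager at the SIFT share
# (\\siftworkstation\...) and add each vss{N} file as a raw image.
print(f"Shadow copies exposed under {VSS_MOUNT}; access them over the SIFT share from Windows.")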

Hopefully this is helpful; in the near future our tools will all adapt enough that we don't have to do this, but until then this works 100% of the time for me when all else fails.



Tuesday, August 6, 2013

Daily Blog #43: Sunday Funday Winner 8/5/13

Hello Reader,
      Another Sunday Funday is behind us and some more great answers were given; thanks to everyone who submitted on Google+ and anonymously! I've learned from this week's challenge that I need to be a bit more specific to encourage more focused answers, and I'll make sure to do that for next week's challenge. This week Eric Zimmerman turned in a great answer, sharing the win with Jake Williams.

Here was the challenge:
The Challenge:     Since we are giving away a copy of Triage, let's have a question related to manually triaging a system.
For a Windows XP system:
You have arrived onsite at a third-party company that is producing a product for your company. It is believed that one of the employees of the company has exfiltrated the database of your customers' information that you provided for mailing and processing sometime in the last 30 days. While the third-party company is cooperating with the investigation, they will not allow you to image every system and take the images back to your lab. However, they will allow you to extract forensic artifacts to determine if there is evidence of exfiltration present and will then allow a forensic image to be created and taken offsite.
With only forensic artifacts available and a 32GB thumb drive, what artifacts would you target to gather the information you would need to prove exfiltration?

Here is Eric Zimmerman's winning answer:
Since this is a triage question, the goals are to get as much info in as short a time frame as possible. the idea is to cast as wide a net into a computer's data as possible and intelligently look at that data for indicators of badness.
i am not going to include every key, subkey, querying lastwrite times/value and how to decode things from the registry or otherwise mundane details. these steps should be automated as much as possible for consistency and efficiency anyways.
the first thing i would do is interview management at the company to find out what kind of usage policies they have: are employees allowed to install whatever software they want? any access controls? who has rights to where? What kind of database was my customers stored in? who has rights to that database? and so on
i would also ask management who their competitors are and then locate their web sites, domain names, etc.
once i had the basic info i would assemble a list of relevant keywords (competitor names, relevant file extensions, etc). i would also look specifically for tools that can be used to connect to the database server and interact with it. this of course changes depending on which database it is (mysql i may look for putty or other terminal programs, oracle = the oracle client, sql server = that client, LinqPad, etc.)
with that basic info in hand i would triage each computer as follows:
1. collect basic system information such as when windows was installed, last booted etc.
2. check running processes for things like cloud storage (dropbox, skydrive, teamviewer, other remote access tools)
3. look for any out of the ordinary file shares on the computer that can be used to access the computer from elsewhere on the network
4. check MRU keys for network shares, both mapped and accessed via command line
5. dump DNS cache and compare against keyword lists
6. dump open ports and compare against a list of processes of interest.
are any remote access tools running? file sharing?
7. Look to see what data, if any, is present on the clipboard. are there any suspicious email addresses or the text of an email or other document? what about a file or a list of files?
8. unpack all prefetch files and see what applications have been executed recently (certainly within the last 30 days, but expand as necessary). again we key in on processes of interest, etc
9. look at all the installed applications on a computer and specifically those installed within the last 30 days
10. dump a list of every USB device ever connected to the machine including make, model and serial #. also reference, when available, the  last inserted date of the device. cross reference this list with any issued thumb drives the company provided from interviews. make a note of any drive letters devices were last mounted to. also process and cross reference setupapi.log for devices connected within the last 30 days.
11. dump web browser history for IE, FireFox, Chrome, and Safari and look for keywords, competitor URLs, etc. hone in on last 30 days, but look for keywords thru entire history in case things were initiated previous to the data being exfil'ed. look for hits against cloud storage, VNC, and similar.
12. dump web browser search history including google, yahoo, youtube, twitter, social networks, etc and again filter by last 30 days with keyword hits across all date ranges. Also look for references to file activity such as file:///D:/somePath, etc.
13. dump passwords for browsers (all of them), mail clients, remote access tools, network passwords (RDP, etc). are any webmail addresses saved by the browsers?
14. dump keys from registry including CIDSizeMRU, FirstFolder, LastVisitedMIDMRU, LastVisitedMIDMRULegacy, MUICache, OpenSavePidlMRU, RDP sessions, RecentDocs, TypedPaths, TypedURLs, UserAssist, appcompatcache and of course ShellBags. all of these keys should be checked for keyword hits as before. specifically, look for any USB
15. Look for instant messaging programs and chat history for skype to include who they are talking to, if any files were xfered, and so on.
16. look for any p2p programs that could have been used to xfil data.
17. search the file systems for such things as archives, shortcut files (lnk), evidence eliminator type programs, drive and file wiping programs, etc. cross reference any lnk files with paths used by USB devices and shellbags to get an idea of what kinds of files were kept on any externally connected devices. look inside any archives found (zip, rar, tar, 7zip, etc) for any keywords of interest (like a text file containing my customers). filter based on MAC dates for files and of course look for keyword hits.
18. look at event logs for relevant entries (what is relevant would be determined by how the computers are configured. what kind of auditing is enabled by the network admins, etc). things like remote access and logins, program execution, etc would be key here.
19. time permitting, and based upon the results from above, use a specialized tool to unpack restore points and look for files as outlined above (lnk files, programs installed, etc)
20. look in the recycle bin for files (hey, ive worked plenty of cases where the incriminating evidence was in there!)
21. dump ram and run a quick "strings" against the binary, then look for keywords. going crazy with volatility is beyond triage, so this will suffice.
depending on where the database lives i would triage that system in the same way (if windows based) but if its mysql on linux or something i would review bash history files, sign ins, FTP logs, etc for signs of data being ex-filed. i would look at the database log files for logins and, if available, sql statements executed, errors, etc from the last 30 days.
finally i would ask about and review any web proxy logs or other logging systems the company has to look for suspicious activity.
all of this data would be automatically added to a timeline that could then be used to further narrow in on interesting periods of activity on each system.
with all the data collected i would want to start looking for default export names or extensions, keyword hits, and whatnot. the machines that have more indicators would go up on my list of machines to want to image. machines with little to no indicators would be removed from consideration.
ShellBags are going to be a key artifact in this case because they contain sooo much good data on Win XP. what other files were on any external devices connected to the systems? do i see the presence of "hacking" tools, ftp clients, putty, etc? are there folders or files indicative of my data or any of my competitors?
32GB is more than enough space to triage all the computers found at the business as there isnt a ton of need to copy files off the computer.
now all those steps are a heck of a lot to do manually (and several of them would be near impossible to do by hand), so in my case i would just run osTriage on each computer and it would pull all that info (and more) in a few seconds. add a bit of time to review the results and i would know which machines i wanted to image for a more thorough review.
with that info in hand i would most likely already know who exfi'led the data, but i would still request an image be made of each machine where suspicious activity was found.
(all of those steps could be further unpacked, but since this is a triage based funday question my response is kept in true triage style, fast and just enough of a deep dive to hone in on computers of interest).

However, Special Agent Zimmerman cannot accept the prize. So Jake Williams' hard work in his winning answer, seen below, wins the prize of a one-year license of AccessData Triage:
 
What artifacts would you look for across multiple Windows XP machine with only a 32GB USB drive to hold them all?
So we think that an evil user exfiltrated a database we provided to the business partner.  Because of the verbiage, we’re working under the assumption here that they were provided with an actual database file (.mdb).
Great. That probably wasn’t bright. In the future, we should NOT provide the business partner the database file and rather provide secure and AUDITABLE access to the data.  This seems like a good idea. There are other issues here, such as revocation of access and even keeping the current data picture (including opt outs for example) that further reinforce why this is better than a file. So we should definitely provide auditable access to the DB in the future, not a database file.
For this writeup, I’ll focus on evidence of execution, evidence of access, and then touch on potential evidence of exfiltration.  Here’s why: under the best of circumstances, we can have a hard time finding evidence of exfiltration. But these aren’t the best of circumstances. 
1. We have no information about how the partner may have exfiltrated the data.  
2. We have limited space in which to collect our data for further probable cause.
We’re really looking for suspicious activity on the machines that will open the door to full images for a complete investigation.  For that reason, we have to keep the scope small and limit it to that which will cover the most ground.
Evidence of execution:
So the first thing I want is access to prefetch files on all the machines.  This is my first stop.  If the user exfiltrated the database AND we have a DLP solution in place, they may need to encrypt the file first. I’d want to look for rar.exe, winzip.exe, or 7z.exe to look for evidence of execution of those utilities. Also, we’re looking for evidence of execution of any anti-forensics tools (commonly used when users are doing illegal stuff).  As a side note here, I’ve performed forensic investigations where I’ve found stuff like wce.exe or other “hacking tools” in prefetch.  In at least one particular case, this discovery was not part of the investigation specifically.  However, the fact that we highlighted it bought us a lot of good will with the client (since this was an indicator of a compromise or an AUP violation).
We’d want to know if the users used any cloud services that aren’t explicitly allowed by policy. For example, Dropbox, SkyDrive, GoogleDrive, etc. would be interesting finds.  While use of these services doesn’t necessarily imply evil, they can be used to exfil files.  Evidence of execution for any of these services would provide probable cause to get the logs from the devices.  For those who don’t know, this is a real passion of mine.  I did a talk at the SANS DFIR Summit looking at detecting data exfiltration in cloud file sharing services and the bottom line is that it isn’t easy. Because of the complexity, I expect criminals to use it more.  Those logs can contain a lot of information, but grabbing all logs in all possible user application directories might be too broad (especially given the 32GB USB drive limitation).  We’ll just start small with Prefetch.
I’d also want to get uninstall registry keys (HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall). My thoughts here are that 32GB is so little data for an enterprise that I’d be looking for evidence of programs installed that may have been used to read the data from the database or exfiltrate the data.  Again, this is so little data that we can store it easily.
UserAssist registry keys from all users would also be on my shopping list.  If the company uses a domain (and honestly what business doesn’t) this will be easier if roaming profiles are enabled.  We want to pull from these two keys for windows XP:
▪ HKEY_USERS\{SID}\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{GUID}\Count\
▪ HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\{GUID}\Count\
Where GUIDs are usually {75048700-EF1F-11D0-9888-006097DEACF9} or {5E6AB780-7743-11CF-A12B-00AA004AE837}
 Again, I’m focusing on evidence of execution because space is tight. These entries won’t cover everything that was executed, generally it only includes items opened via Explorer.exe (double click).  Also, the entries are ROT13 encoded, but that’s easily overcome. Because it is possible that users deleted data, we might also want to grab UserAssist from NTUSER.DAT files in restore points.  This might be pushing the limit of my storage depending on how many machines our target has to triage (and how many Restore Points they each have).
Evidence of Access:
In this category, I’d be looking at MRU keys for Access.  Now these change with the version of MS Office, but a good starting point is to look in these subkeys in the user’s profile (where X.X is the version):
• Software\Microsoft\Office\X.X\Common\Open Find\Microsoft Access\Settings\Open\File Name MRU
• Software\Microsoft\Office\X.X\Common\Open Find\Microsoft Access\Settings\File New Database\File Name MRU
• Software\Microsoft\Office\X.X\Access\Settings
Locating our filename doesn’t prove anything, presumably we gave it to them to open, but it gives us a start.
If we know that the file was placed on a network share with auditing enabled, we want to identify who had access to that share using the records in the Security event log.  If auditing wasn’t enabled, we may still be able to find evidence of failed logon attempts to the share in the event logs on the file server.  Successful connections to the share may be found in the MountPoints2 (Software\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2) key, so we want to grab that from users’ profiles.  Of course, it goes without saying that just because someone mapped a share doesn’t mean they even read our file (let alone exfiltrated it).
Event logs:
Depending on the event logs available, we may be able to tell if a user has accessed the database via an ODBC connector.  Usually users just open an Access file, but they could add it as an ODBC data source.  I don’t have my systems available here at DEFCON to do testing, but if the file was added as an ODBC source, there should be some remnants left over to locate.  Often these will show up in event logs, so we want to check event logs for our database file name.
Possible Evidence of Exfiltration:
Firewall logs are another item I’d collect.  Yes, I know some people will laugh at me here, but we are looking for data exfiltration and that may have happened over the network.  If we have some idea of where the data was exfiltrated to, firewall logs, if enabled, are a useful source of information.  Fortunately for our case with only a 32GB USB drive for the whole network, the logs are capped at 4MB by default.  This allows us to collect a lot of them without taking up much space.  We could get logs from 100 machines and consume only about 400MB of our space.
Setupapi.log is another file I’d like to collect.  This log shows first insertion time for USB devices (a common exfiltration point).  While this log can’t tell us if a file was copied to a USB, analyzing setupapi.log files over an enterprise can show patterns of USB use (or misuse).  Correlating that with information with their security policy may yield some suspicious behavior that may be probable cause for further forensic images.
If there are other logs (from an endpoint protection suite) that log connections, I’d want to see if I could pull those as well.  While we’re at it, we’d want to filter event logs (particularly application event logs) for detection notices from the AV software.  What we are looking for here is to determine if any of the machines in scope have had infections since we turned over our database file.  We can filter by the log provider and we probably want to eliminate startup, shutdown, and update messages for the AV software.
If I had more space, I’d grab index.dat files from profile directories.  Depending on the number of systems and profiles, we’d probably run out of space pretty quickly though.  What we’re looking for here are applications that may use WinInet APIs and inadvertently cache information in index.dat files.  This happens sometimes in malware and certainly data exfiltration applications might also fit the bill.  However, my spidey-sense tells me that these index.dat files alone from many profiles/machines could exhaust my 32GB of space.
Parting thoughts:
Forensics where we rely on minimal information is a pain.  You have to adapt your techniques and triage large numbers of machines while collecting minimal data (32GB in this case).  I’d like to do more disk forensics and build timelines. I might even use the NTFS triforce tool.  If this were a single machine we were performing triage on, then my answer would certainly involve pulling the $USNJrnl, $LogFile, and $MFT files to start building timelines. The SYSTEM, SOFTWARE, and NTUSER.DAT hives on the machine would also be on my short shopping list.  However, over the multiple machines I believe the scenario covers, this just isn’t feasible in the space we’ve been given.

I'll follow up this contest with how I approached this case in real life in a later blog post. I will say that in my case the first thing I did was triage which systems showed access to the database itself to create a pool of possible exfiltrators. Then I went back and started pulling the data discussed in our two winning answers! From there I was able to discover enough suspicious activity and patterns of access to the underlying data through the UserAssist, shellbags and lnk files to get approval to create a forensic image.
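
For what it's worth, a minimal sketch of that first pass is below. It scans a folder of collected NTUSER.DAT hives for the MS Access 'File Name MRU' locations Jake listed above; the python-registry library, the Office version list, the folder layout and the database file name are all placeholder assumptions for illustration, not details of the actual case.

# Minimal sketch: scan collected NTUSER.DAT hives for Access "File Name MRU"
# entries that reference the customer database, to build the pool of users
# who opened it. Office versions, paths and the file name are placeholders.
import glob
from Registry import Registry  # pip install python-registry

DB_NAME = "customers.mdb"            # placeholder database file name
OFFICE_VERSIONS = ["11.0", "12.0"]   # Office 2003 / 2007 era, typical on XP
MRU_PATH = (r"Software\Microsoft\Office\{ver}\Common\Open Find"
            r"\Microsoft Access\Settings\Open\File Name MRU")

for hive_path in glob.glob("collected/*/NTUSER.DAT"):
    reg = Registry.Registry(hive_path)
    for ver in OFFICE_VERSIONS:
        try:
            key = reg.open(MRU_PATH.format(ver=ver))
        except Registry.RegistryKeyNotFoundException:
            continue
        for value in key.values():
            data = str(value.value())
            if DB_NAME.lower() in data.lower():
                print(f"{hive_path}: Access MRU hit ({value.name()}): {data}")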

Tomorrow we continue the web 2.0 forensics series as I look to see when I should stop and move on and then come back to it later with other services besides Gmail.

Sunday, August 4, 2013

These Four Fell This Week - Teardowns

They aren't tearing them all down but there's a boom. Styles have changed. We're mostly getting efficient 4,000+ square foot American Foursquares with a full width porch and 2-car garage. We'll see what we get on these lots.

 
This is now.

1336 Lanier Blvd, before the teardown (July 28, 2013).
The week before. The houses to left and right have already been done. Who remembers the cute houses that they replaced?

Teardown demolition at 3130 Lanier Drive at Windsor Parkway, near Oglethorpe.
They tore it down this very day August 1, 2013. I blogged it a year ago when it first went on the market. Great lot in great neighborhood near Oglethorpe, doomed.

1126 Spring Valley, before the teardown (July 25, 2013).
Gone by noon.

1328 Greenland, teardown in context (July 9, 2013).
This is one of  those streets with smallish, un-updated 70+ year old houses on big lots in Morningside. Many were done pre-crash.

1147 Orme Circle, before the teardown (August 3, 2013).
This one hurt a little bit. Property tax records say 1910 with about 1,200 square feet. It was photogenic but not quite so charming in person. I passed it hundreds of times and only noticed the sculpture.

July Favourites!

I rounded up my top 5 favourite beauty products throughout the month of July and filmed exactly why I loved them, which you can watch below!

JULY FAVOURITES

Urban Decay Revolutionize Lipstick in Streak - This (along with MAC's limited edition Sushi Kiss lippie) has been my most used lipstick throughout the month of July.  It's the most luxurious, peachy pink lipstick which really suits my pale complexion.  I am wearing this in the video above and it is just beautiful.  Props to Urban Decay for creating this beauty!

Clarins Instant Light Lip Perfector in 02 - A second lip product here, and this time it's one of the much loved & spoken about Clarins Lip Perfectors.  I recently got my hands on this and it hasn't left my makeup bag since.  It's a caramel-scented light, non-sticky lipgloss which just adds the nicest sheen to my pout.  I could sniff this all day.

Cover FX Total Cover Pressed Powder - July has been all about matte skin for me; it was a ridiculously hot and humid month, which meant that my face looked pretty much permanently oily, so using a pressed powder was a daily (and hourly) occurrence.  I love this pressed powder for its buildable coverage and pale shade which doesn't leave me looking orange at all.

Skin Doctors Hair No More - I'll admit now I'm super lazy in the hair department.  I've never had a wax (shock-horror) and try to get away with shaving as little as I can (not painting a great picture of myself, am I, ha!).  Well, I've been using this beauty in July and I have already started seeing a slight difference.  It's a hair inhibitor spray which helps to minimise the growth of hair.  It can be used all over but I've started off using it just on my legs; just spritz it on and massage it in a couple of times a week and you should start seeing a difference after a couple of weeks.  I am adamant about keeping this up and using it religiously this summer!

Sleek Ink Pot Gel Eyeliner in Dominatrix - I've scrimped on eyeshadow in July and just focused on eyeliner to give me that big-eyed look, and this has been my gel liner of choice!  It's ridiculously affordable, at around £5 including a brush, and is fantastic quality, giving me the most perfect, matte black winged liner look.  This is my third pot of this stuff (I've been using it for a couple of years now) and I favour it over any other high-end gel liner.  Try it out!

Extras
Favourite Band: Little Comets - I have been absolutely obsessed with Little Comets throughout the entire month and still can't stop listening to their songs.  My favourites of theirs are In Blue Music We Trust, Figures, Little Opus, Isles... and ALL the others.  I really urge you to give them a listen and I hope I get to see them live soon.

Books: I read two books this month: The End of Alice by A.M. Homes and Lolita by Vladimir Nabokov.  Both are based around the same (disturbing) topic and both were interesting (?) to read.  I wouldn't necessarily recommend them as they aren't nice books, but I thought they were both well written.  I'm definitely going to read a happy book next!

Lovebox Festival: I went to Lovebox with some of my favourite people a couple of weekends ago and had the best time.  I have a post coming up about it shortly!

Filming with DailyMix: I was super lucky to film with Tanya Burr for DailyMix in July; we had a great little chat about high-street/high-end makeup dupes!  I really can't wait for the video to go live, and I'll let you know as soon as it does!

What have been your favourite products throughout the month of July?  Any more books to recommend (happy ones)?

xxxx

Daily Blog #42: Sunday Funday 8/4/13

Hello Reader,
           It's that time again, Sunday Funday time! For those not familiar, every Sunday I throw down the forensic gauntlet by asking a tough question. To the winner go the accolades of their peers and prizes hopefully worth the time they put into their answer. This week we have quite the prize from our friends at AccessData.

The Prize:
The Rules:
  1. You must post your answer before Midnight PDT (GMT -7)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:
     Since we are giving away a copy of Triage, let's have a question related to manually triaging a system.

For a Windows XP system:

You have arrived onsite at a third-party company that is producing a product for your company. It is believed that one of the employees of the company has exfiltrated the database of your customers' information that you provided for mailing and processing sometime in the last 30 days. While the third-party company is cooperating with the investigation, they will not allow you to image every system and take the images back to your lab. However, they will allow you to extract forensic artifacts to determine if there is evidence of exfiltration present and will then allow a forensic image to be created and taken offsite.

With only forensic artifacts available and a 32GB thumb drive, what artifacts would you target to gather the information you would need to prove exfiltration?

Good luck! I look forward to your answers. 

Saturday, August 3, 2013

Battery Care is the best software for saving and monitoring your laptop's battery

Battery Care is one of the best programs created to save and monitor your laptop's battery.
Moreover, it also enhances the performance of your laptop, shows the estimated time remaining on battery power and displays your current charge level.
So download this all-in-one Battery Care software now to extend your laptop's battery life.

 

 
 

Daily Blog #41: Saturday Reading 8/3/13

Hello Reader,
           It's Saturday, and after a long week of working (heck, you might be in the office working right now) it's time to let the disks image, the indexes run and the hashes hash while you sip some coffee and do some forensic reading.

1. If you haven't watched/listened to it already, we had a pretty great Forensic Lunch yesterday; you can watch it here: http://www.youtube.com/watch?v=UG8ZZM7S5nk. This week we talked about HTML5 offline caching in Gmail with Blazer Catzen, the life of an internal corporate forensics person with Brandon Foley, Shadow Kit with David Dym, and updates to some OSX forensics and the Triforce. Give it a watch, and next week you can watch us live and participate here: Google+ Event.

2. Speaking of Blazer Catzen, he gave a great presentation at Techno Forensics on file system tunneling. He said we could upload and share the slides from his presentation, and you can download them here: click here for the zip of the presentation and reference spreadsheets

3. In the Forensic Lunch I talked about an article from a couple of years ago describing the offline Gmail storage we were talking about and the risks to the user; you can read it here: http://geeknizer.com/pros-cons-of-html-5-local-database-storage-and-future-of-web-apps/

4. I'm a big fan of WinFE, and over on the WinFE blog they had a good write-up on getting WinFE to build with Autopsy 3. If you are looking for a free and open source portable toolkit that's still Windows-based, read about how to get it all together: http://winfe.wordpress.com/2013/07/15/more-on-winfe-and-autopsy/.

5. We talked about Shadow Kit this week, so here's a link to read more about it and grab a copy: http://redrocktx.blogspot.com/p/shadowkit.html. You should then read this post, http://redrocktx.blogspot.com/2012/04/shadowkit-working-with-disk-images.html, which is a great write-up on how to get your forensic image into a VHD format so Windows will treat it as a physical local device rather than as a network-attached device, as it does with FTK Imager and other mounting techniques.

6. If you are working with Windows 8 or Windows Server 2012 then you'll be happy to read the latest SANS blog entry by Chad Tilbury pointing out which tools now support their memory structures, you can read it here: http://computer-forensics.sans.org/blog/2013/07/30/windows-8-server-2012-memory-forensics.

7. A new blog I found that was just updated this week is from French expert Zythom. He has a humorous yet factual write-up of a case he worked on, his process and his approach: http://zythom-en.blogspot.com/2013/07/filling-up-on-pr0n.html.

That's what I have for you this week. Have an article or blog that you think I'm missing? Leave a comment with a link; I'm always trying to learn more and find more researchers who are sharing their data.

Daily Blog #40: Web 2.0 Forensics Part 5

Hello Reader,
                    In the past posts in this series we've focused on what you can recover from web 2.0 sites, how the data sits on the disk and how the data is transmitted across the network. In this post we talk about what these message fields mean and how to build a quick carver for them. Tomorrow is Saturday Reading, and I will be including a link to today's Forensic Lunch cast, which I think was the best so far!

Mail folder summary view versus Mail folder full view:
What I noticed in viewing the data as it went across the network is that there are two distinct types of data streams being sent, at least to Chrome. The first is the page of the mailbox you requested, which contains the message summaries as well as the message contents themselves. The second is additional pages of the mail folder being viewed, where only the message summaries are sent and cached for faster loading to the user.

The full view is the first page sent and contains data in two sections; the first is the message summary. For example, here is the message summary for my daily win4n6 mailing list digest:

,["cs","140395ee6229f7d4","140395ee6229f7d4",1,,,1375366638336000,"140395ee6229f7d4",["140395ee6229f7d4"]
,[]
,[]
,[["140395ee6229f7d4",["^all","^i","^smartlabel_group","^unsub"]
]
]
,,,[]
,[["","win4n6@yahoogroups.com"]
,["No Reply","notify-dg-win4n6@yahoogroups.com"]
]
,,,[]
,[]
,,,"Digest Number 1388","[win4n6] Digest Number 1388"]
,

Each section of the inbox view with full messages starts with ["cs", which I'm guessing means 'content start', and ends with ,["ce"] as shown below.
]
,0]
,["ce"]
So we can recover full messages with a regex as simple as 
(\["cs",.+\["ce"\]) 

However, this is a greedy expression and may capture multiple messages within a single match; the sketch below uses a non-greedy version to avoid that.
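
Here's a minimal carving sketch along those lines. The non-greedy pattern and the placeholder input file are my own illustration, not Google's documented format, so adjust it to whatever source you are searching (pagefile, unallocated space or a saved network stream).

# Minimal sketch: carve individual ["cs" ... ["ce"] blocks with a non-greedy match.
# The input file is a placeholder; the pattern assumes fragments survive intact.
import re

# DOTALL lets the match span newlines; .+? is non-greedy, so each ["cs" pairs
# with the nearest following ["ce"] instead of swallowing every message at once.
MESSAGE_BLOCK = re.compile(rb'\["cs",.+?\["ce"\]', re.DOTALL)

with open("pagefile.sys", "rb") as f:        # placeholder source file
    data = f.read()

for i, match in enumerate(MESSAGE_BLOCK.finditer(data)):
    block = match.group(0)
    print(f"[{i}] offset {match.start():#x}, {len(block)} bytes")
    # Pull a couple of quick indicators out of each block for triage.
    addresses = set(re.findall(rb'[\w.+-]+@[\w.-]+\.\w+', block))
    print("    addresses:", b", ".join(sorted(addresses)).decode("ascii", "replace"))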

Other fields of interest in the header include the message number internally assigned by Gmail ("140395ee6229f7d4"), the message sender ("win4n6@yahoogroups.com") and the subject ("[win4n6] Digest Number 1388").

When the content of the message begins you will see ["ms", which again I can only assume is short for 'message start', as seen below:

["ms","140395ee6229f7d4","",4,"win4n6@yahoogroups.com","","win4n6@yahoogroups.com",1375352053000,"There are 5 messages in this issue. Topics in this digest: 1a. Re: TightVNC F...",["^all","^i","^smartlabel_group","^unsub"]
,0,1,"[win4n6] Digest Number 1388",["140395ee6229f7d4",["win4n6@yahoogroups.com"]
,[]
,[]
If this is a mail folder summary view (which I've seen for pages preloaded after the first), this would be the end of the content cached and retrievable. If this is the first page of the mail folder, then it will be followed by the text of the message itself:

,["No Reply \u003cnotify-dg-win4n6@yahoogroups.com\u003e"]
,"[win4n6] Digest Number 1388","There are 5 messages in this issue.\... Huge message digest here removed for readability\n",[[]
,[0]
,"",[]
]
,0,[[]
,[["win4n6","win4n6@yahoogroups.com"]
]
,[]
,[]
,[]
,[]
]
,"Thu, Aug 1, 2013 at 5:14 AM",[]
,1,0,0,0,1,"returns.groups.yahoo.com","yahoogroups.com","","\u003c1375352053.298.19336.m7@yahoogroups.com\u003e","[win4n6] Digest Number 1388","\u003cwin4n6.yahoogroups.com\u003e",,[0]
,,[]
,,0,[0]
,-1,,,[]
,[]
,0,0,1,0,0,,,[]
,,5314,-1]
,,0,"5:14 AM","5:14 am",0,,,"",["en"]
,0,"Thu, Aug 1, 2013 at 5:14 AM",[]
,,,,0,,"win4n6.yahoogroups.com",,0,1,"","win4n6@yahoogroups.com",[[]
,[["win4n6","win4n6@yahoogroups.com"]
]
,[]
,[]
,[]
,[]
]
,-1,,,,"yahoogroups.com",,[]
,[[[2013,7,31,5,37,,0,0]
,,"Wed Jul 31, 2013 5:37 am",0,0,0,0]
,[[2013,7,31,10,6,,0,0]
,,"Wed Jul 31, 2013 10:06 am",0,0,0,1]
,[[2013,7,31,8,28,,0,0]
,,"Wed Jul 31, 2013 8:28 am",0,0,0,3]
,[[2013,7,31,8,42,,0,0]
,,"Wed Jul 31, 2013 8:42 am",0,0,0,4]
,[[2013,7,31,8,50,,0,0]
,,"Wed Jul 31, 2013 8:50 am",0,0,0,6]
]
,0]
,["ce"]
You'll notice there is no matching message end (me) to the message start (ms) as we saw in the cs and ce pairing earlier. Instead the message ends with some index data about the messages in the thread related to this message, so Gmail can display them easily, and finishes with ["ce"] again.
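
A small follow-on sketch for the ["ms" header is below. The field positions and the millisecond-epoch reading of the timestamp are my interpretation of the captures above (1375352053000 decodes to the same time shown as 5:14 AM Central in the message), so treat them as assumptions rather than a documented schema.

# Minimal sketch: pull the obvious fields out of a carved ["ms" header.
# Field positions and the millisecond epoch interpretation are inferred from
# the captures above, not from any published Gmail schema.
import re
from datetime import datetime, timezone

MS_HEADER = re.compile(
    r'\["ms","(?P<msg_id>[0-9a-f]+)","[^"]*",\d+,'
    r'"(?P<sender>[^"]*)","[^"]*","[^"]*",(?P<ts>\d{13}),'
    r'"(?P<snippet>[^"]*)"'
)

sample = ('["ms","140395ee6229f7d4","",4,"win4n6@yahoogroups.com","",'
          '"win4n6@yahoogroups.com",1375352053000,"There are 5 messages in this issue...",')

m = MS_HEADER.search(sample)
if m:
    when = datetime.fromtimestamp(int(m["ts"]) / 1000, tz=timezone.utc)
    print(m["msg_id"], m["sender"], when.isoformat(), m["snippet"][:40])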

For each message retrieved from Gmail you'll find these pairings. On Tuesday I'll dig into the JavaScript that interprets this data to see if we can find more data points for analysis. Until then, happy hunting for Gmail fragments, and I hope you stick around for tomorrow's Saturday Reading and Sunday Funday!