forensicmike1 – #DFIR | #RE | #OtherGeekThings. Views expressed are my own.

Taking a gander at iOS apps on an M1 Mac
Tue, 25 May 2021 – https://forensicmike1.com/2021/05/25/taking-a-look-at-ios-apps-on-an-m1-mac/
I wanted to share some initial research I did over the rainy long weekend. I recently got access to a MacBook Pro with the M1 chip, so naturally I wanted to take a look at how running iOS apps natively on macOS works, where the app data ends up on disk, and of course, whether Frida works on it or not. 🙂

Given the large number of iOS apps that you can presently install via the App Store, it’s definitely conceivable that we might start seeing instances of this on Mac extractions, so I thought it might be helpful to have a frame of reference for where iOS app data is located.

For this research, I decided to install Private Photo Vault (which continues to bring a ton of traffic to my site to this day haha), but I suspect the story will be similar for other apps.

First things first, let’s find PPV on the App Store. One note: you need to click on the iPhone & iPad Apps filter, which seems obvious now but didn’t immediately jump out at me when I started this before my coffee activated.

Okay great, it’s installed. It shows up in Launchpad just like any other installed app. Neat.

Now let’s launch the app, set it up, and take a photo. The app seamlessly integrates with the MBP’s front-facing camera (which still produces pretty awful-looking pictures compared to an iPhone).

First I’ll check if Frida sees the process by running

frida-ps | grep Vault

We see two processes of interest including Photo Vault and Keychain (Photo Vault):

More on my adventures with Frida in a moment, but first let’s take a moment to search for the data. With the help of the find command (and knowing by heart at this point what files and folders PPV generally leaves on the filesystem of iDevices), I located the app’s data at:

/Users/(name)/Library/Containers/(Installation GUID)/Data/Library

As seen in the screenshot below, this folder has similar contents to PPV on iOS, except with a ton of other folders too!

So what is going on here? If we do an ls -l it becomes clearer:

Any of the items starting with l are symlinks! In fact, if we step up a folder to /Data, we can see that Desktop, Downloads, etc. – all of the folders we’d expect to see in ~/ for our user – are here again, as symlinks.

I’m sure there’s a good reason for this, but it’s something to be aware of when traversing these app directories, as it can spam us pretty badly when using the find or ls commands. 🙂
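If you end up scripting over these container directories, it’s worth skipping the symlinks explicitly. A minimal sketch of the idea in Python, using a throwaway directory (the paths are made up for illustration; the real container path includes a per-install GUID):

```python
import os
import tempfile

# Build a tiny stand-in for an app container's Data directory:
# one real file, plus a symlink pointing back out of the container
# (the way Desktop, Downloads, etc. do on a real M1 Mac).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Library"))
with open(os.path.join(root, "Library", "real.plist"), "w") as f:
    f.write("data")
os.symlink("/Users", os.path.join(root, "Desktop"))

real_files, symlinks = [], []
# followlinks=False (the default) stops os.walk from descending into
# symlinked directories, so we never loop back into the user's home.
for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
    for d in dirnames:
        if os.path.islink(os.path.join(dirpath, d)):
            symlinks.append(d)
    for name in filenames:
        full = os.path.join(dirpath, name)
        (symlinks if os.path.islink(full) else real_files).append(name)

print(symlinks)    # ['Desktop']
print(real_files)  # ['real.plist']
```

On the command line, find "$dir" -type l gives you the same list of links directly.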

Fortunately, when we create a tar file from the contents of the Data directory, it does not traverse the symlinks. Here I ran: tar cvf ~/Desktop/ppv_macos.tar .

And as you can see, the symlinks are still present, but listed as ~20-byte files instead of directories.
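Python’s tarfile module makes the same behaviour easy to verify: unless you ask for dereferencing, a symlink is stored as a link entry pointing at its target, and the tree behind it is never walked. A toy reconstruction with made-up file names:

```python
import os
import tarfile
import tempfile

# Toy stand-in for the app's Data directory: one real file, one symlink.
src = tempfile.mkdtemp()
with open(os.path.join(src, "photo.jpg"), "wb") as f:
    f.write(b"\xff\xd8fake-jpeg-data")
os.symlink("/Users/someone/Desktop", os.path.join(src, "Desktop"))

tar_path = os.path.join(tempfile.mkdtemp(), "ppv_macos.tar")
with tarfile.open(tar_path, "w") as tar:
    # dereference=False is the default: the link itself is stored,
    # and the directory tree behind it is never walked.
    tar.add(src, arcname=".")

with tarfile.open(tar_path) as tar:
    members = {m.name: m for m in tar.getmembers()}

print(members["./Desktop"].issym())    # True: archived as a symlink entry
print(members["./Desktop"].linkname)   # just the target path, a few bytes
print(members["./photo.jpg"].isreg())  # True: the real file keeps its data
```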

And 7-Zip considers them dangerous and doesn’t bother to recreate them when you do a full extraction:

Frida

I now wanted to see if I could attach with Frida. On my first attempt, I got an error saying my user account didn’t have permission to attach. I remembered seeing this before, and after some googling found that it had to do with SIP (System Integrity Protection) being enabled. To disable it, one must shut down and then restart into recovery. Note that on M1 Macs, instead of holding Cmd+R while booting, you just hold down the power button until the boot options screen appears. After booting to recovery, a good ol’ csrutil disable from the terminal still does the trick.

After rebooting, I went to startup PPV again and was met with this sad message:

So it looks like some additional work will be required to deal with this. I did find this tweet from @SparkZheng, though, which suggests it is possible with re-signing – but I haven’t had time to go further just yet.

Notwithstanding that, if you don’t have yourself a test iDevice but do have an M1 Mac and are interested in accessing filesystem artifacts without having to do repeated extractions, this could definitely be a timesaver.

Analysis of the ABTraceTogether app (iOS)
Sat, 02 May 2020 – https://forensicmike1.com/2020/05/01/analysis-of-the-abtracetogether-app-ios/
I decided to have a look at the ABTraceTogether contact tracing app released by the Alberta Government today (May 1, 2020) and blog about my findings.

There’s potential for conspiracy theories and disinformation to run rampant for an app like this, so I wanted to have a look for myself and see how it actually works.

I was also curious to see if there might be any forensically valuable information found in the app’s databases and files.

I’ll start with my general observations and provide a more detailed explanation afterwards.

Note: I’ve added a couple of updates today (May 2); any new info is marked in green.

Observations

The registration process does not prompt you for your name, email, or any other PII (personally identifiable information) except for one item: you must register using a valid phone number, which will be used to contact you in the event you come into contact with someone who has contracted the virus.

The app is built on BlueTrace/OpenTrace, which is open source and has published a whitepaper that explains its methodology in great detail. It was first used in the TraceTogether app in Singapore beginning in March 2020.

Encounters between devices are only tracked locally, and must be uploaded to AHS manually (and voluntarily) if AHS contacts you and requests that you do so. In my tests the app did not communicate with the server any more than necessary (such as to retrieve encrypted, forward-dated Temp IDs).

Analysis of the tracing database did not net any information of significant forensic value. Encounters between devices are logged; however, the only information available is: 1) the other device’s make and model, 2) the host device’s make and model, 3) the time of the interaction, and 4) an indicator of how close the devices came, namely the received signal strength indicator (RSSI). The remaining data is encrypted and not accessible without keys that AHS maintains.

In the BlueTrace design, the server (and its security) is of utmost importance. While out of scope for this article, I think it is worth noting that given all encryption keys, IDs, tempIDs, and registered phone numbers are stored on the server, any sort of poorly configured or insecure endpoints could pose the largest risk (such as in the event of a data breach).

Overall, the app appears to deliver on its privacy promises. I did not find much of potential forensic value in artifacts from the app’s sandbox. The app’s biggest failing, I think, is the requirement (iOS only) to keep the phone unlocked with the screen active at all times. I just can’t see people doing this: they will be on their phones, which means this app won’t be in the foreground and thus won’t be working. I do acknowledge this limitation is not the fault of the developer, but rather of the restrictiveness of iOS. Hopefully with future development, such as the recently released Apple/Google contact tracing API, the need to leave the device unlocked can be eliminated.

UPDATE 2020-05-02: As pointed out by Chris Thompson (@yegct), another curiosity is that the OpenTrace project seems to be using a GPL license, which could be problematic, as this license dictates that anything it ships with be licensed under the GPL as well. I found a GitHub issue on the repo questioning the same.

Static analysis

I obtained a copy of the app (version 1.0.0) on my test iPhone 6S running iOS 13.2.2. I used frida to obtain a copy of the IPA with a decrypted app binary and then used Hopper (macOS) to examine it.

It appears to be a small, straightforward app with not a lot of code to examine. It’s written in Swift which makes the static analysis a bit less intuitive.

The app uses a library called OpenTrace (which is an implementation of BlueTrace). BlueTrace has published a whitepaper explaining the technical methodology which, I feel, gives very solid explanations for why things are the way they are.

UPDATE 2020-05-02: Just a minor clarification on the above paragraph. Rather than consuming a pre-compiled library or framework, it appears the OpenTrace code has been integrated directly into the ABTraceTogether codebase under the ABTraceTogether class. This does not mean there aren’t variations; however, I did test several strings from debug messages found in the OpenTrace code and located all of them, unmodified, in the ABTraceTogetherApp binary.

Info.plist

The app’s Info.plist contains some interesting info, such as developer-specified descriptions for the permissions the app may request.

NSBluetoothAlwaysUsageDescription – ABTraceTogether exchanges Bluetooth signals with nearby phones running the same app. These signals contain an anonymised ID, which is encrypted and changes continually to ensure your privacy.

NSCameraUsageDescription – Grant ABTraceTogether permissions to access your camera if you would like to upload a photo as part of a support request

NSPhotoLibraryUsageDescription – Grant ABTraceTogether permissions to access your photo library if you would like to upload a photo as part of a support request

The plist also specifies that, at a minimum, iOS 13 is required. This requirement was interesting to me because the app does not use the new Apple/Google API. It could prevent people on older hardware that can’t run iOS 13 from using the app.

Overall, nothing super surprising here. The camera/photo gallery permissions didn’t come up in any of my tests.
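If you have an extracted Info.plist in front of you, pulling these strings out programmatically is straightforward with Python’s plistlib. The plist below is a mocked-up fragment built from the keys discussed above, not the app’s real file:

```python
import plistlib

# Mocked-up Info.plist fragment containing the keys discussed above
# (not the app's actual file -- just enough to demonstrate parsing).
mock_plist = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>MinimumOSVersion</key><string>13.0</string>
  <key>NSBluetoothAlwaysUsageDescription</key>
  <string>ABTraceTogether exchanges Bluetooth signals with nearby phones.</string>
</dict></plist>"""

info = plistlib.loads(mock_plist)
# Collect every developer-specified permission description
usage_keys = {k: v for k, v in info.items() if k.endswith("UsageDescription")}
print(info["MinimumOSVersion"])  # 13.0
print(sorted(usage_keys))        # ['NSBluetoothAlwaysUsageDescription']
```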


Nothing else of note came out of static analysis. Future research: obtain a copy of the Android version and review it as well.

Forensic value of filesystem artifacts

The database tracer.sqlite, located in Library/Application Support/, and specifically its ZENCOUNTER table, is where interactions between nearby devices are logged.

To generate a test encounter, I installed the app on my own iPhone (in addition to my research phone), and with the app open, brought the devices fairly close together.

This test showed up in table ZENCOUNTER as follows:

  • ZV – The version of the BlueTrace protocol the other device is using (currently 2).
  • ZRSSI – The received signal strength indicator (RSSI); can be used to assess how close the devices actually got.
  • ZTIMESTAMP – When the encounter took place.
  • ZTXPOWER – Transmission power? Always 0.0, 7.0, or NULL in my database so far.
  • ZMODELC – A device make and model; can be the other device or our device. C is believed to refer to “Central”. See the Encounter Record definition on the OpenTrace GitHub for more.
  • ZMODELP – A device make and model; can be the other device or our device. P is believed to refer to “Peripheral”.
  • ZMSG – An encrypted payload, base64 encoded, including IV/auth tag (84 bytes).
  • ZORG – The organization code indicating the country / health authority with which the peripheral is enrolled.
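To pull the readable fields out with a consistent timestamp, a query along these lines works. The snippet below runs it against a mock in-memory copy of the table, with only the columns described above (the real Core Data table has extra bookkeeping columns, and the sample row values, including the ZORG code, are invented):

```python
import sqlite3

# Mock tracer.sqlite with only the columns described above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE ZENCOUNTER (
    ZV INTEGER, ZRSSI REAL, ZTIMESTAMP REAL,
    ZMODELC TEXT, ZMODELP TEXT, ZMSG TEXT, ZORG TEXT)""")
con.execute("INSERT INTO ZENCOUNTER VALUES "
            "(2, -58.0, 609100000.0, 'iPhone6,2', 'iPhone8,1', "
            "'<base64 blob>', 'CA_AB')")  # sample values, invented

# Core Data timestamps count seconds from 2001-01-01 (the Cocoa epoch),
# so add 978307200 to convert them to a Unix epoch for datetime().
rows = con.execute("""
    SELECT ZMODELC, ZMODELP, ZRSSI,
           datetime(ZTIMESTAMP + 978307200, 'unixepoch') AS utc
    FROM ZENCOUNTER
    ORDER BY ZTIMESTAMP""").fetchall()
for row in rows:
    print(row)
```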

The ZMSG structure is described in the whitepaper as follows:

(Note the typo of AED which should read AES).

The forward-dated tempIDs were found in a file under Library/Caches/ca.ab.gov.ahs.contacttracing/fsCachedData/{GUID}. The contents of the file are shown here, redacted slightly so as not to show the full tokens:

Future work

One remaining bit of work on iOS is to examine the iOS keychain. There are a bunch of entries in there, and I’m curious what they could be used for, given that temp ID token generation takes place server-side.

Examiner-coder-types: Learnin’ git can make you a better developer
Sat, 29 Feb 2020 – https://forensicmike1.com/2020/02/29/examiner-coder-types-learningit-can-make-you-a-better-developer/
I decided to write an article about Git and GitHub. Why? I’ve been exposed to Git a lot since I started working for a software company, and now I wish I could go back in time and use it a lot more, even for projects I had no intention of ever releasing to the public (i.e. most of them!). This article is meant to speak directly to those just like “pre-software-company me”, who might be missing out on the significant advantages of using Git. Ask yourself the following:

  • Are you self-taught in most of the programming you know?
  • Have you ever made a change to your project and subsequently wished you could undo said change as it created unforeseen bug(s)?
  • Have you tried to use git before and been frustrated with some of the very first steps you need to take, such as setting up a new repo from an existing project?
  • Do you keep all of your code in a cloud drive like Dropbox or Google Drive to ensure you have consistent code backups?
  • Have you ever struggled to work collaboratively with another developer on a project?
  • Are you aware of Git, and the fact that it is popular and supposedly useful, but overwhelmed by the initial learning curve?

If you identified with any of the above, this article might help you. If you already know why Git and GitHub are amazing, you could still benefit from some of the tips below. This article isn’t so much a “Git 101”, as there are lots of great resources out there for that. I may write more on this subject in the future, though!

Disclaimer

I am not a git master. I am a git neophyte, and that’s okay. You don’t need mastery to reap the benefits. Since I started working at a software company, I’ve met some true git wizards, and have benefited greatly from their willingness to impart wisdom.

I’m only just getting started on my Git-learning journey, and I do mean journey: it’s a slog that can honestly be pretty frustrating. (Out of those times of frustration often come some of the biggest Git epiphanies.)

Despite my relative newness, I feel strongly enough about the benefits to write this post, because I think there are a lot of folks out there who stand to benefit greatly from beginning to use this technology.

If you disagree with any of the advice I’ve laid out, think I should add to this article, or just want to chat Git – feel free to drop a comment or connect with me on Twitter/Discord.

Just some of the advantages…

  • Redundant, cloud-based backups of JUST YOUR SOURCE CODE. Github now offers private repositories for free. This is SO much better than storing your entire code folder on Dropbox / GDrive / etc.
  • The ability to selectively roll-back changes or track specific changes over time.
  • Makes collaborative coding much easier if it ends up being a project that involves multiple developers.
  • Increases the likelihood of people trying out your product (vs. downloading an installer from a random website authored by someone they don’t know or trust)
  • If you have future aspirations of working at a software company, particularly in any kind of development role, pre-existing fluency in Git will be a huge asset.

While it does take time to learn, and is a skill that must be maintained, you will benefit greatly as a developer by learning it, and you don’t need to be a master to use it right away.

Setup a repo from an already existing project

If you’re like me, you may find that you have often already begun writing code before it ever occurred to you to set up a GitHub repo. Visual Studio has built-in features for this, but I’ve always struggled to make them work with a project that already has code in it. All is not lost.

There’s a pretty good write-up here: https://help.github.com/en/github/importing-your-projects-to-github/adding-an-existing-project-to-github-using-the-command-line

But if the inner millennial in you can’t bear to read the above tutorial start to finish, please at a minimum keep in mind the following KEY POINT:

When creating your repo on Github, DO NOT create a readme or set a license. You want an entirely empty repo. E.g.

As long as you do that, things will go smoothly in the tutorial. I’d also recommend running git status before you proceed with your first commit, to ensure you don’t see bin/ or other such folders. If you do, read the next section first.

Before doing your initial commit, create a .gitignore

.gitignore is a file that lists what should and shouldn’t go into your commits. In general, it is considered bad practice to commit anything that gets generated by your compiler. Another no-no is committing dependencies like NuGet packages, as this will massively and unnecessarily inflate the size of your repository.

Of course, you are welcome to research what a good .gitignore looks like, but I would personally recommend checking out gitignore.io. This site provides templates depending on the type of project you’re working on. For instance, if you are working on a C# project, you can search for CSharp and get yourself a ready-to-go .gitignore:

You can then save this as .gitignore in the root of your repository, and the next time you run git status you’ll see that folders like:

bin/
obj/
.vs/
packages/

are all absent from the change list. This means you will only be committing the important stuff to GitHub, and any would-be repo cloners will be pleased with you.
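For reference, a minimal Visual Studio / C#-flavoured .gitignore covering the folders above could be as simple as the sketch below; for real projects, prefer the fuller template gitignore.io generates:

```
# build output
bin/
obj/

# Visual Studio working files
.vs/
*.user

# restored NuGet packages
packages/
```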

Make a nice readme.md

Readme.md is a critical piece of any good Git repo. It is one of your only “sell me” pitches to help a potential user decide whether or not to use your repo. It’s a great place to do things like:

  • Explain what the product is today, where it came from, and where you envision it going (current and future features / roadmap).
  • Give direction on how you would like potential issues reported, or whether you are open to others submitting pull requests to your repository.
  • Provide links to pre-compiled binaries, if you decide to provide them.
  • Shout out any third party libraries you are using in your project.

If your goal is to increase exposure, consider adding one or more screenshots to show off your product.

The “md” in readme.md stands for Markdown. Generous (but not gratuitous) use of markdown-based formatting will add polish to your repo. If you aren’t sure how to start with markdown, check out Github’s help page on the subject: https://help.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax
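Putting those pieces together, a readme skeleton might look something like this (the project name, section names, and file paths are just suggestions):

```markdown
# MyTool

One-sentence pitch: what it does and who it's for.

![screenshot](docs/screenshot.png)

## Features and roadmap
- What works today
- What's planned next

## Downloads
Links to pre-compiled binaries, if you provide them.

## Reporting issues / contributing
How you'd like issues reported, and whether PRs are welcome.

## Credits
Third-party libraries used by the project.
```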

Tip: If you have set up your repo without a readme (such as when you are creating a repo from an existing project), you can add one right from your main repo page by clicking the green button:

Choose the right license

If you’re planning to make your repo public at any point, it is important not only to specify a license, but also to understand the implications of the license you ultimately select. GitHub has made this process easier by providing summaries of what each license allows. You might wonder which license is “the best”, and the truth is this really depends on what your project is intended to do.

I’m not an expert in OSS licensing by any stretch, but I definitely see the value in learning the differences between them depending on how widespread of use you anticipate your project seeing.

For example, suppose you are writing a new filesystem, codec, parser, or algorithm that you envision software companies potentially adopting in the future. GPL licenses are designed to force anyone who uses the code to make their own code open source as well, which may well scare those companies away; a more permissive license could be a better fit there.

If, on the other hand, you are writing a full-fledged app and you’d like to avoid having your work plagiarized, GPL might be the perfect solution.

For additional reading on licenses, I recommend the following resources:

How to add a license after the repo is already setup

You’ve set up your repo, but now you need to add a license. How do we do this?

From your repository’s homepage on GitHub, click “Create new file”. As soon as you type in the word “license”, you will see a button appear called “Choose a license template”. Click it!

From here, you can select different licenses (the popular ones like Apache, GPL, and MIT are listed at the top) and review their differences. Once ready, click ‘Review and submit’.

Finish every coding session with a commit or PR

There are lots of different considerations here, such as whether your repo is public or private, whether you are collaborating with others, etc. The easiest approach of all is to just commit directly to master, which might be fine if it’s a private repo and you’re the only one contributing.

The point is, in order to maximize the “redundant backup” benefit, you need to make sure that any code you write exists somewhere other than your local machine. Don’t make the mistake of thinking you should wait until you’ve finished an entire topic’s worth of code changes to push them. If you’ve made any code changes at all that you wouldn’t want to have to re-write, make sure you propagate those changes to GitHub somehow. Remember: git commit isn’t git push. Running git commit won’t actually back up your changes to GitHub. Keep an eye on the output of your git commands, as it’s usually pretty clear when your code has gone out to the internet:

Conclusion

Git has a learning curve, and isn’t something you can master in a week. But the sooner you start getting exposed to it the better.

Spice up your forensic web reports with UI Frameworks
Fri, 03 Jan 2020 – https://forensicmike1.com/2020/01/03/spice-up-your-forensic-web-reports-with-ui-frameworks/
I wanted to blog about a subject that’s come up in a number of conversations recently: the idea of spicing up the web reports spit out by scripts by making use of UI frameworks (which are generally free, but may also have paid options if things get serious!). Like many examiners, I had some familiarity with web development but hadn’t necessarily kept up with the trends over the years. It wasn’t until fairly recently that I finally invested the time in learning to build using Bootstrap, and I haven’t looked back.

While I still prefer to write most of my HTML by hand in Notepad++, I discovered something profoundly easy to learn: UI frameworks. By simply adding a couple of these resources to your project (we’re going to look at Bootstrap and FontAwesome today), you can really make your reports stand out with very little effort. Users genuinely appreciate the time you take to include things like icons, and thanks to FontAwesome this can be a very easy thing to do.

Import Bootstrap + Dependencies

First things first, you need to bring in Bootstrap. If you’re okay with having an internet requirement for your report, you can use a CDN (Content Delivery Network). For simplicity, we’ll use this approach today. If you want it to work offline, you’ll need to download the frameworks including dependencies.

In the <head> section of your webpage, add the following:

<!-- Bootstrap core CSS -->
    <link href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.11.2/css/all.min.css" integrity="sha256-+N4/V/SbAFiW1MPBCXnfnP9QSN3+Keu+NlB+0ev/YKQ=" crossorigin="anonymous" />

Then, before the closing </body> tag, add your scripts- notice we are adding jQuery and Popper (dependencies) as well.

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js" integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo=" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.7/umd/popper.min.js" integrity="sha384-UO2eT0CpHqdSJQ6hJty5KVphtPhzWj9WO1clHTMGa3JDZwrnQq4sF86dIHNDz0W1" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha256-WqU1JavFxSAMcLP2WIOI+GB2zWmShMI82mTpLDcqFUg=" crossorigin="anonymous"></script>

And that’s it! Now you’re ready to use Bootstrap and FontAwesome.

What is FontAwesome and how do I use it?

FontAwesome is a library that provides some 1500 icons free of charge for web developers out there to use in their projects.

The easiest way to get started with FontAwesome is by using their website. You can search the icon library, and any of the icons that aren’t faded out are free to use. As an example, let’s say I decide I want to use the “tags” icon. To use it in my web app, I simply write this code:

<i class="fa fa-tags" aria-hidden="true"></i>

So our project now looks like this:

While we’re here, let’s talk Bootstrap for a moment. In my code, you can see that I wrote in a custom style of margin: 12px; for my enclosing <div> element to avoid having it look like this:

But what if there were a better way? With Bootstrap, instead of defining a manual style (which may not look the same on all devices), we can use a set of predefined classes to describe to the Bootstrap framework how we want our layout to work. I can do this using the following HTML:

<div class="mx-2 my-2">
    Well hello, FontAwesome! <i class="fa fa-tags" aria-hidden="true"></i>
    </div>

So what’s happening here? We’re leveraging Bootstrap’s spacing utilities, which use abbreviated class names. In this case there are two: mx-2, which sets the margins on the X axis (left/right sides) to 2, and my-2, which does the same on the Y axis (top/bottom sides). I could vary the number from 0 to 5 to increase or decrease the margin as required. You can also set specific margins, like ml-3 (for margin-left, 3) and so on. To manipulate padding, swap the m for a p, as in px-2.

To go back to FontAwesome for a moment: one thing that’s really neat about these icons is that they respond to changes via CSS, so you can easily change your icon to whatever color (or size) you need. This is where the “font” part of FontAwesome comes in. If I change my tags icon as follows:

<i class="fa fa-tags" style='color: orange;font-size: 32px;' aria-hidden="true"></i>

You can see that the result is a much larger, orange icon that still looks great!

More on Bootstrap

Alright, so let’s dive a little bit deeper into Bootstrap. Let’s try out the NavBar component for our page. I’m using the example directly from Bootstrap’s site and plopping it into the HTML. Here’s my code:

<nav class="navbar navbar-expand-lg navbar-light bg-light">
        <a class="navbar-brand" href="#">Fancy Report</a>
        <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
          <span class="navbar-toggler-icon"></span>
        </button>
      
        <div class="collapse navbar-collapse" id="navbarSupportedContent">
          <ul class="navbar-nav mr-auto">
            <li class="nav-item active">
              <a class="nav-link" href="#"><i class="fa fa-home" aria-hidden="true"></i> Home <span class="sr-only">(current)</span></a>
            </li>
            <li class="nav-item dropdown">
              <a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
                Dropdown
              </a>
              <div class="dropdown-menu" aria-labelledby="navbarDropdown">
                <a class="dropdown-item" href="#">Action</a>
                <a class="dropdown-item" href="#">Another action</a>
                <div class="dropdown-divider"></div>
                <a class="dropdown-item" href="#">Something else here</a>
              </div>
            </li>
          </ul>
          <form class="form-inline my-2 my-lg-0">
            <input class="form-control mr-sm-2" type="search" placeholder="Search" aria-label="Search">
            <button class="btn btn-outline-primary my-2 my-sm-0" type="submit">Search</button>
          </form>
        </div>
      </nav>
    <div class="mx-2 my-2">
    Well hello, FontAwesome! <i class="fa fa-tags" style='color: orange;font-size: 32px;' aria-hidden="true"></i>
    </div>

And result:

Looking pretty good, right? I also wanted to point out that this NavBar is 100% responsive out of the box, so it’ll look good on desktop and mobile with little to no effort from you.

One of my favourite things to do when drafting an idea for a report is to look at some UI examples made with your framework of choice. Bootstrap has these examples or alternatively you can just peruse their excellent component library docs.

This final example includes a few different components: cards with headers, buttons, and a few more FontAwesome icons.

<div class="container">
          <div class="row">
              <div class="col-md-5">
                <div class="card mt-3">
                <div class="card-header h4">
                    <i class="fa fa-tags" aria-hidden="true"></i> Report Contents
                </div>
                <div class="card-body">
                <p class="card-text">Well hello, FontAwesome and Bootstrap! </p>
                <a href="#" class="btn btn-primary"><i class="fa fa-search" aria-hidden="true"></i> View Details</a>
                </div>
              </div>
            </div>
        </div>
    </div>

And result:

For more reading, I highly recommend looking through the online Bootstrap docs to see many awesome examples of what’s possible.
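To close the loop on the report-generation angle: a forensic tool could emit this same card markup programmatically. Here is a minimal Python sketch; the CSS classes come from the example above, while the function and field names are invented for illustration:

```python
# Sketch: render Bootstrap "card" markup for a list of report artifacts.
# The CSS classes match the example above; the Artifact type and function
# names are invented for illustration.
from dataclasses import dataclass
from html import escape

@dataclass
class Artifact:
    title: str
    detail: str

CARD_TEMPLATE = """\
<div class="card mt-3">
  <div class="card-header h4"><i class="fa fa-tags" aria-hidden="true"></i> {title}</div>
  <div class="card-body"><p class="card-text">{detail}</p></div>
</div>"""

def render_report(artifacts):
    """Return a Bootstrap container div holding one card per artifact."""
    cards = "\n".join(
        CARD_TEMPLATE.format(title=escape(a.title), detail=escape(a.detail))
        for a in artifacts
    )
    return f'<div class="container">\n{cards}\n</div>'

print(render_report([Artifact("Report Contents", "Well hello, Bootstrap!")]))
```

Escaping the values with `html.escape` matters in this context: artifact data pulled from a device is attacker-controlled, so it should never be interpolated into a report as raw HTML.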

KnowledgeC: Now Playing entries https://forensicmike1.com/2019/10/07/knowledgec-now-playing-entries/?utm_source=rss&utm_medium=rss&utm_campaign=knowledgec-now-playing-entries https://forensicmike1.com/2019/10/07/knowledgec-now-playing-entries/#respond Mon, 07 Oct 2019 18:57:12 +0000 https://3.88.229.156/?p=375 I know it’s been ages since I’ve posted! I have been settling in with Magnet Forensics and have to say – it’s been an incredible experience so far. I continue to be amazed and inspired by the dedication and skill of the folks who work tirelessly to make Magnet AXIOM and countless other products the […]

The post KnowledgeC: Now Playing entries appeared first on forensicmike1.

I know it’s been ages since I’ve posted! I have been settling in with Magnet Forensics and have to say – it’s been an incredible experience so far. I continue to be amazed and inspired by the dedication and skill of the folks who work tirelessly to make Magnet AXIOM and countless other products the absolute best they can be.

I was recently helping out a customer with a question about an iPhone he was examining. He wanted to corroborate the device owner’s story — allegedly he had watched some videos on the device at a certain date and time.

I suggested KnowledgeC “Now Playing” as a reference point and this led down a rabbit hole, namely:

  • Does clearing Safari history impact KnowledgeC.db?
  • Does private browsing affect input into KnowledgeC.db?

Answering these questions should be easy enough with the help of a jailbroken device (which I always keep near these days). I wanted to share my findings with the #DFIR community as there are some interesting things I observed along the way. Sarah Edwards herself noted in her blog series about KnowledgeC that there is more work to be done in terms of validating that the data is as it appears to be. I would say this work follows that path.

One other thing to note: my jailbroken device is running iOS 11.4.1, and at the time of writing we are at iOS 13.1.2, so there could be differences between this and the latest/greatest iOS version. First things first, I went into Safari and visited the first video that popped up on YouTube (I do not have the YouTube app installed, so it played in the browser).

I had no idea what ‘Blippi’ was until clicking the first random video that came up on YouTube.com as trending. Lesson learned.

Next, using SFTP I collected KnowledgeC.db from /private/var/mobile/Library/CoreDuet/Knowledge, including the shm and wal files, and opened it in DB Browser for SQLite. I then ran Sarah Edwards’ Now Playing query (APOLLO) and here is what I observed:

So far so good. I’d concur with the data here that I made it through an ad and about 3 seconds of the Blippi video before feeling immense regret and hitting the home button to stop that madness. By the way, Oct Edge Pre Roll is an ad, which at some point I skipped, but I’d say 15 seconds is conceivable for how long that all took.

Next, I went back to my jailbroken device and cleared all history through Settings > Safari. I then pulled KnowledgeC.db and ran the query again. Nothing changed; it was exactly the same as before.

Now things start to take a turn for the weird. I went to another video on YouTube within Safari and once again pulled my KnowledgeC.db:


So… the new video is missing altogether, but even more strangely there is an additional entry for Blippi (note the entry creation is about 5 minutes after the fact) stating a ‘Usage in Seconds’ of 319. (Note that the Usage in Seconds column is actually a computation of ZENDDATE - ZSTARTDATE that Sarah has provided for us.)
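For anyone who wants to reproduce that computed column themselves, here is a minimal sketch of a Now Playing query against KnowledgeC.db. The table, column, and stream names follow the publicly documented ZOBJECT schema (Core Data timestamps count seconds from 2001-01-01, hence the 978307200 offset); verify them against your own iOS version.

```python
# Sketch: reproduce the "Usage in Seconds" column (ZENDDATE - ZSTARTDATE)
# for Now Playing rows in KnowledgeC.db. Table/column/stream names follow
# the publicly documented ZOBJECT schema; Core Data timestamps are seconds
# since 2001-01-01, hence the 978307200 offset.
import sqlite3

NOW_PLAYING_QUERY = """
SELECT ZVALUESTRING,
       datetime(ZSTARTDATE + 978307200, 'unixepoch') AS start_utc,
       ZENDDATE - ZSTARTDATE AS usage_seconds
FROM ZOBJECT
WHERE ZSTREAMNAME = '/media/nowPlaying'
ORDER BY ZSTARTDATE
"""

def now_playing(conn):
    """Return (title, start_utc, usage_seconds) rows from an open connection."""
    return conn.execute(NOW_PLAYING_QUERY).fetchall()

# Usage: now_playing(sqlite3.connect("KnowledgeC.db"))
```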

A few things we might surmise from this:

  • Even with Safari suspended and history cleared, if I were to lock my screen I suspect it would show my “Now Playing” of the Blippi video. It wasn’t until I went to a different video that it got changed.
  • KnowledgeC writes are not guaranteed to be immediate and definitely do not on their own reflect active viewing time.

I then watched the same video again and once again pulled my KnowledgeC. This time, I got the new entry as expected:

That leaves the other question: whether private browsing makes a difference with respect to KnowledgeC Now Playing records. I then visited more YouTube videos in ‘Private Mode’ in Safari:

They showed up just the same.

One last note. After all of this I did a KnowledgeC-wide query to see what kind of imprint I left beyond the Now Playing results:

And there you have it. I think with the /app/inFocus rows it is a much clearer picture of the fact that I did not actually spend a lot of time watching any one video. The moral of the story here is that KnowledgeC data is indeed amazing, but not without its nuances. You must build your story based on the totality of ALL relevant KnowledgeC records, and avoid dwelling solely on the information derived from a single log type or row.

Photo Vault app still pwnable in 2021? An adventure in iOS RE https://forensicmike1.com/2019/06/26/ios-photo-vault-app-still-pwnable-in-2019/?utm_source=rss&utm_medium=rss&utm_campaign=ios-photo-vault-app-still-pwnable-in-2019 https://forensicmike1.com/2019/06/26/ios-photo-vault-app-still-pwnable-in-2019/#comments Wed, 26 Jun 2019 19:40:46 +0000 https://3.88.229.156/?p=288 Update 2021/08/22: Thanks to a tip from a reader, it was brought to my attention that PPV iOS made some pretty big changes in a recent update (early August 2021 – version 11.9). In reading the release notes, as well as doing some of my own tests, I’ve discovered some stuff and wanted to touch […]

The post Photo Vault app still pwnable in 2021? An adventure in iOS RE appeared first on forensicmike1.

Update 2021/08/22: Thanks to a tip from a reader, it was brought to my attention that PPV iOS made some pretty big changes in a recent update (early August 2021 – version 11.9). In reading the release notes, as well as doing some of my own tests, I’ve discovered some stuff and wanted to touch on their impact.

Summary

  • Happy to say that across the board, almost all the changes improve security for PPV users. The only source of apprehension for me is the cloud based backup which I will discuss in more detail later in the post.
  • Bruteforcing the PIN is still possible although the time it takes to do so has increased, due to a larger keyspace.
  • With some effort put into reversing, it is still possible to decrypt the media, especially if you know the PIN or have access to the decrypted keychain.
  • For users who were using PPV before this version, my suspicion is that the app will continue to use the old way, in order to avoid re-encrypting all media. So don’t assume if you have data from 11.9 that it will be subject to the new approach.

Database location has changed, and is now encrypted.

  • The database can be found one directory up from its previous location, with the extension ‘.ecd’ (short for Encrypted Core Data). Despite the new extension, it is fully compatible with SQLCipher viewing tools like DB Browser for SQLCipher.
  • This is a solid security upgrade to PPV as previously one could glean a lot of information about what the encrypted media might be by simply looking at the database (and observing things like album titles).

Media items are now protected using a unique key per item.

  • Formerly, once you had derived the ‘media key’, you were good to go for decrypting all data if you knew the structure of the cryptographic media container.
  • From a security perspective, this doesn’t change things a whole lot because of the fact that you still only need 1 key (the SQLCipher key) to get to the database. It does add extra steps though, and falls more in line with what some of the larger apps are doing (e.g. Snapchat).

Numeric passcodes can now be *up to* 8 digits long.

  • This is a long overdue increase of security for the app. 4 digit numeric keyspaces, even ones with a reasonably strong KDF backing them, are pretty much always going to be susceptible to bruteforce.
  • The ‘up to’ is significant as well. Instead of a known-length 4-digit PIN (keyspace of 10^4), we now have any length from 4 to 8 digits (a keyspace of 10^4 + 10^5 + 10^6 + 10^7 + 10^8).
  • Still, having any cap on the length at all seems unnecessary. Also having an option for custom alphanumeric would be nice to see.
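As a quick sanity check on that keyspace math (my own arithmetic, not anything from the app):

```python
# Keyspace growth from a fixed 4-digit PIN to PINs of 4 to 8 digits.
old_keyspace = 10 ** 4
new_keyspace = sum(10 ** n for n in range(4, 9))  # 10^4 + 10^5 + ... + 10^8

print(old_keyspace)                   # 10000
print(new_keyspace)                   # 111110000
print(new_keyspace // old_keyspace)   # 11111 -> ~11,000x more candidates
```

So the worst case is a little over 111 million candidates, dominated by the 8-digit tier; a meaningful slowdown, but still well within bruteforce range for a weak KDF.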

There is now a ‘cloud backup’ option available (for a fee).

  • Full disclosure: I have not investigated this feature as of yet. These opinions are based on generalized concepts that would apply to any vault app with such a feature.
  • I will say that non-CloudKit (iCloud) based storage for an app like this, for me, is on its own reason enough to exercise extreme caution. This is not a reflection on PPV itself (I have no knowledge of the developer), but I have seen enough abhorrent things with other vault apps with this type of offering to default to alarm bells.
  • The patch notes clearly state that the separate ‘cloud password’ is never backed up to their server, but even without the password I would have a lot of questions about how strong of a KDF is being used on the password, what are the minimum password strength requirements, etc. Imagine a scenario where their backup server gets breached, and the only thing standing between the attacker and your most sensitive media is 10,000 rounds of PBKDF2-SHA1.
  • Beyond outsider threats, what guarantees do we have that people affiliated with PPV aren’t going to attempt to bruteforce our data? How big is this company, how many people work there, how many of those people have access to this server and what access controls and auditing is in place to monitor that access?
  • The fact that a paid subscription is required means that PPV will indirectly have access to a lot more PII of their users than they otherwise would, which could be used to associate media to a specific identity.
  • Does PPV have the resources to respond to legal orders, such as warrants or preservation orders? It is only a matter of time before CSAM gets uploaded to their server.

Update 2020/01/29: I have since done a bit more work with this app and have found a way to bruteforce the PIN without keychain access. I also created a Python based decryptor script (instead of the C# one attached to this post). Rather than make them publicly available, please contact me and I will be happy to share the scripts with you. You can do so on the DFIR Discord or Twitter @forensicmike1.


Original post: It’s been a while since I posted anything, and I suppose that’s a natural part of having a blog. I decided not to force myself to procure content and instead wait until I had something I really wanted to write about. And so here we are! In this article I’m going to talk about a process brand new to me until a few days ago. This has been an absolute blast to learn about, although I will admit it was frustrating at times.

This article focuses more on the outcome of my research, without dwelling too much on exactly how I got there. I am however planning a follow-up post with a whole pile of lessons learned as I think there are a lot of gotchas and overall frustrations that could very possibly be skipped.

Why target this app specifically?

com.enchantedcloud.photovault or “Private Photo Vault” (hereafter PPV) has been the subject of security research before. In November 2015, a detailed breakdown was published by Michael Allen at IOActive, and he found that the app didn’t actually encrypt anything! Its security amounted to blocking users from seeing any media inside until the passcode had been entered, and this was extremely easy to defeat. I figured revisiting this same app in 2019 could be fun/interesting just to see how far it has or hasn’t come since then.

Key Takeaways

Whether you consider this app secure or not depends on what kind of access you’ve got to various extraction methods. For examiners with filesystem type extractions (GrayKey / Cellebrite CAS / jailbroken devices), the security of PPV is trivial to defeat and I will demonstrate how below. For examiners obtaining logical type extractions (iTunes backup, UFED 4PC, Magnet ACQUIRE, etc.) decryption will be more challenging and further reversing work will be required. I do believe it is possible though.

PPV uses RNCryptor, an encryption library with implementations available in Objective-C, C#, JS, etc. RNCryptor is open source and we can absolutely use that to our advantage. One thing RNCryptor doesn’t manage is key storage, and the developer of PPV has apparently decided to rely on the security of the iOS Keychain to store, well, everything we need to perform decryption.

The master key is stored in the keychain under “ppv_DateHash”. The plaintext PIN, which is a maximum of 4 digits, is also stored in the keychain as “ppv_uuidHash1”.

Each encrypted media file (found with its original in the app’s sandbox at /Library/PPV_Pics/) is essentially a container. The first two bytes can be safely ignored, the next 16 bytes are the IV (Initialization Vector), and the remaining bytes are the cipher text with the exception of the last 32 bytes which are related to HMAC and can safely be ignored.
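In code, the layout described above amounts to simple byte slicing. Here is a minimal Python sketch of just the parsing step (the full decryption PoC in C# appears later in this post):

```python
# Sketch: split a PPV media container into its parts, per the layout
# described above: 2-byte header, 16-byte IV, ciphertext, 32-byte HMAC tag.
def split_container(blob: bytes):
    header = blob[:2]           # version + options bytes, safe to ignore
    iv = blob[2:18]             # AES initialization vector
    ciphertext = blob[18:-32]   # the encrypted media itself
    hmac_tag = blob[-32:]       # integrity tag, not needed for decryption
    return header, iv, ciphertext, hmac_tag
```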

Once generated, the master encryption key never changes even if you change your PIN. This might seem like a poor design choice, but it’s actually how your iPhone works too and it can be quite secure as long as the master key is well protected. Secure Enclave makes sure that this key never sees the light of day but this is not true for keychain data.

Basic Outline of the Process / Tools Used

  • Locate and jailbreak test iOS device (I used Electra root for my test device, an iPhone 6S running iOS 11.2.1).
  • Installed PPV (target app) by sideloading with Cydia Impactor (app store works too).
  • Set up access over USB with ITNL (iTunnel) and obtained root access to the device via SSH.
SSH tunnel over USB thanks to itnl.
  • Installed and verified operation of frida-server on the device – I did this using Sileo but should be doable via Cydia as well.
  • Used frida-ios-dump by AloneMonkey to obtain decrypted binary of the target app (recommend Python 3.7)
  • Conducted static analysis of the decrypted binary using Hopper. I had great success searching for a value from the plist I believed to be associated with crypto. This app is not free but the trial is fully functional for 15 minutes – make sure you hurry! 🙂
Static analysis using Hopper – this class looks like it might be of use!
  • With my newly discovered knowledge I fired up Frida with this little gem: ObjC Method Observer, an awesome codeshare script by mrmacete (@bezjaje) to snoop on iOS method invocations of a specific class on a live device. (I targeted the LSLCrypt and RNCryptor classes in PPV.)
Note the test passcode of 1234 at the end of the giant SHA256 string.
  • Switched back and forth between Hopper and Frida console until I established a good idea of what was going on. The biggest breakthrough here was that the encryption key doesn’t change when you change the passcode, and that it is stored in keychain.plist
PIN change does not affect our encryption key, which conveniently gets stored in this device’s keychain.plist
  • Studied the RNCryptor-objc github repo to develop an understanding of how this AES wrapper works.
  • Developed a PoC in C# using the amazing LINQPad to decrypt media in PPV_Pics given the keychain.plist.

Decryption PoC

This script is C# and was written in/for LINQPad, but could be adapted to a Visual Studio project very easily. It uses only native libraries. You will need to plug in your AES key as base64 in the “USER CONFIGURATION REQUIRED” section 😀! I call this a PoC because it does zero error checking and may or may not work for you without tweaking.

I might throw together a GUI app to do this more easily if people would use it. DM me on Twitter or Discord and let me know if that sounds interesting/useful.

// If adapting outside LINQPad, add: using System; using System.IO;
// using System.Linq; using System.Security.Cryptography;
void Main()
{
	// USER CONFIGURATION REQUIRED --------------------------------->
	
		// The input directory should point to the PPV sandbox where all the encrypted media resides
		var pathToEncryptedFiles = @"c:\ppvtest\335CE0B0-..-B521433DD5D2\Library\PPV_Pics";
		
		// Where to spit out the decrypted media
		var decryptFilesTo = @"c:\ppvtest\out\";
	
		// from keychain.plist -- genp with key "ppv_dateHash"
		var aesKeyb64 = "mUAf0A6QF+DOoo...7tbZuqw2ImuRAkql0mY0zM=";

	// END USER CONFIGURATION REQUIRED !!!
	
	Directory.CreateDirectory(decryptFilesTo);

	
	// Convert to byte[] from base64 string
	var aesKey = Convert.FromBase64String(aesKeyb64);
	
	// Iterate encrypted files in the PPV_Pics folder.
	foreach (var item in Directory.GetFiles(pathToEncryptedFiles))
	{
		var inputData = File.ReadAllBytes(item);
		// The IV is located at offset 0x2 and is 16 bytes long.
		var iv = inputData.Skip(2).Take(16).ToArray();
		
		// Our header is 18 bytes (0x0 for version, 0x1 for options, and 0x2 for 16 bytes IV)
		var headerLength = 18;
		
		// The cipher text is the rest, minus 32 which is used for HMAC stuff.
		var cipherText = inputData.Skip(headerLength).Take(inputData.Length - headerLength - 32).ToArray();
		
		File.WriteAllBytes(decryptFilesTo + new FileInfo(item).Name, decryptAesCbcPkcs7(cipherText, aesKey, iv));
	}
}

// Borrowed from Rob Napier's RNCryptor-cs
// https://github.com/RNCryptor/RNCryptor-cs
private byte[] decryptAesCbcPkcs7(byte[] encrypted, byte[] key, byte[] iv)
{
	var aes = Aes.Create();
	aes.Mode = CipherMode.CBC;
	aes.Padding = PaddingMode.PKCS7;
	var decryptor = aes.CreateDecryptor(key, iv);


	byte[] plainBytes;
	using (MemoryStream msDecrypt = new MemoryStream())
	{
		using (CryptoStream csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Write))
		{
			csDecrypt.Write(encrypted, 0, encrypted.Length);
			csDecrypt.FlushFinalBlock();
			plainBytes = msDecrypt.ToArray();
		}
	}

	return plainBytes;
}

Acknowledgements

I’d like to thank the following people for their assistance on this research project:

  • Braden Thomas (@drspringfield) at Grayshift for his always spot-on advice and extensive depth of knowledge on all things iOS.
  • Ivan Rodriguez (@ivRodriguezCA) for his excellent blog and great advice.
  • @karate on DFIR Discord (Magnus RC3 Sweden) (@may_pol17) for his excellent guidance and urging to get Frida working.
  • Or Begam (@shloophen) from Cellebrite for reviewing my decryption PoC and spotting that final bug, connecting me with Ivan Rodriguez and generally being awesome.

A lesson in home network security https://forensicmike1.com/2019/05/19/a-lesson-in-home-network-security/?utm_source=rss&utm_medium=rss&utm_campaign=a-lesson-in-home-network-security https://forensicmike1.com/2019/05/19/a-lesson-in-home-network-security/#respond Sun, 19 May 2019 21:04:09 +0000 https://3.88.229.156/?p=199 There’s been a lot of buzz about RDP vulnerabilities of late, and one tweet in particular publicly shamed companies who in 2019 were still using port forwarding to remotely access machines on their corporate LANs. I thought, they’re talking about companies, not regular joes. But the tweet stuck with me and eventually motivated me to […]

The post A lesson in home network security appeared first on forensicmike1.

There’s been a lot of buzz about RDP vulnerabilities of late, and one tweet in particular publicly shamed companies who in 2019 were still using port forwarding to remotely access machines on their corporate LANs. I thought, they’re talking about companies, not regular joes. But the tweet stuck with me and eventually motivated me to take a small step towards improved security.

I’ll admit it! I’ve had a random (non-3389) port forwarded to a machine on my LAN to facilitate RDP connections for some time, really just for the sheer convenience and cost effectiveness of it. I selected a port that isn’t commonly used for anything, to help prevent it from showing up on Shodan or in common port scans. It gave me a ‘security through obscurity’ level of confidence that others probably share about their home LANs.

I googled easy ways to improve RDP security and came across this guide, which walks you through setting a local policy that automatically locks accounts out after so many failed login attempts. I went ahead and set this up on the box I RDP to. On that box, I had only 1 local account, which was part of the Administrators group. Today, I went to log in to the machine and got this (image from Google Images):

OK, I thought, there could be a non-scary explanation for this — maybe a scheduled task I created with stored credentials and completely forgot about or something?

Side note — when you are literally the only user that can log in on a machine and get locked out, it’s a pain in the ass to fix! The login screen won’t let you log in as any other user, and even password resets do not unlock the account. I booted to recovery and launched a command prompt, but wasn’t able to see my locked-out account from there using net user — it listed other accounts but not the problem one. The fix I finally used was to replace utilman.exe with cmd.exe from the recovery command prompt and boot normally, then click the ‘Ease of Access’ button to get an administrator command prompt on a normal boot, from which I was able to set the user account to active again.

Alright so, we’re unlocked now and all is back to normal, right? As a forensics guy I really wanted to discover what caused the lockout. I opened up Event Viewer and checked out the Security logs. What I observed next stunned me. In the 12 days before all this happened, I had over 14,000 failed attempts to log on via RDP. Further inspection showed that the failed attempts were often coupled with random account names that could only be part of a dictionary attack. I exported the list and wrote a C# script to itemize the names used, and put them in a pastebin here if anyone is interested. ADMINISTRATOR is the clear winner with 9,611 failed logins.
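For anyone wanting to reproduce that tally without C#, a few lines of Python over an exported event list do the same job. The “TargetUserName” column name here is an assumption; match it to however you export your Event ID 4625 records.

```python
# Sketch: tally account names across failed logon events (Event ID 4625)
# exported to CSV. The "TargetUserName" column name is an assumption;
# adjust it to match your export.
import csv
from collections import Counter

def top_attempted_accounts(csv_path, n=10):
    """Return the n most-attempted account names with their counts."""
    with open(csv_path, newline="") as fh:
        names = (row["TargetUserName"].upper() for row in csv.DictReader(fh))
        return Counter(names).most_common(n)
```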

This led me on a ‘security improvement’ rabbit hole that included:

  • Disabling port forwarding altogether on my router
  • Running nmap on my machine and finding ports open somehow.
  • Discovering that UPnP was enabled on my router and disabling it – really?
  • Going through the ASUS Security Checklist and turning everything green, such as changing my ‘admin’ username to something else, disabling WPS, enabling HTTPS-only access to the router, updating router firmware to the latest version, etc.

While there’s no evidence at this point of successful access to my machine, I felt like this was an excellent wakeup call. As for remoting, I am going to disable RDP and instead use a third party remoting service — one which allows me to use 2FA and has ‘login attempt from X.X.X.X’ notification emails, etc.

Obtain a logical dump of Signal data on Android with signal-back https://forensicmike1.com/2019/05/15/obtain-logical-signal-android/?utm_source=rss&utm_medium=rss&utm_campaign=obtain-logical-signal-android https://forensicmike1.com/2019/05/15/obtain-logical-signal-android/#comments Wed, 15 May 2019 20:11:38 +0000 https://3.88.229.156/?p=173 I’ve had a number of people asking for a walkthrough on this process so thought I’d make it into this week’s blog entry. It’s not a particularly technical process and I’m the first to admit doesn’t adhere to strict forensic fundamentals either. I recognize this and agree! This approach is certainly one of the last […]

The post Obtain a logical dump of Signal data on Android with signal-back appeared first on forensicmike1.

I’ve had a number of people asking for a walkthrough on this process, so I thought I’d make it into this week’s blog entry. It’s not a particularly technical process, and I’m the first to admit it doesn’t adhere to strict forensic fundamentals either. I recognize this and agree! This approach is certainly one of the last things to do on an Android device — once you’ve completed all other acquisition techniques, including potentially taking photos of the screen. You should also consider any potential repercussions of manipulating the device directly and be willing to speak to this down the road; otherwise, don’t do it!

We’ve slowly been forced to make concessions as forensic examiners as the technology evolves and with it, an increased difficulty in obtaining that pristine unaltered dataset we get with a write-blocked mechanical hard drive. As long as you’ve followed sound forensic processes and obtained as much data as possible without making any changes, I think it’s a great ability to possess — being able to export Signal data this way — given time is not always abundant and message data can be unpredictably supermassive. We’ve all had the experience of having to capture screen photos one by one, and let’s face it – it sucks. Worse, the data you get from screen photos is often less precise… perhaps times are rounded to the nearest minute, relative to the time of the moment it is being viewed, or not visible at all.

Enough with the disclaimer, where do we start?

First, remove any SD card in the device, place it in a bag or tape it to something with a label, and set it aside. Locate a blank SD card. We’ll use this temporary SD card to transfer off our backup data once it is prepared. I generally wait to insert the SD card until after the backup has been created.

Open the Signal application on the device. Go to settings via the ‘…’ button at the top right of the home screen. From here look for ‘Chats and Media’ and tap on that.

On the next screen, click the slider switch to enable Chat Backups. If it is already enabled, switch it off and back on. A new password is generated each time. NOTE: You may wish to turn this OFF after completing an extraction.

Enabling the slider switch will trigger a dialog with a numeric password on it. The passphrase is read from left to right, row by row, as if there were no spaces in it. Check the box. HIGHLY RECOMMEND TAKING A PHOTO vs. writing it down.

After the backup completes, the original screen will update with a new last backup date. Go back to the Home screen and locate the File Manager app. On the device root (not the SD card), locate the folder called Signal. It will be empty aside from your newly generated backup. Now insert your blank SD card. Assuming all goes well and it gets mounted, long-press on the Signal folder and then choose ‘Move To’ from the context menu.

I usually choose to move it to the blank SD card, so it isn’t left behind on the device. Transfer this to your examination machine and copy it out. If you were to look at this in hex, you’ll see what you expected to see – an encrypted container file.

Now we need to use signal-back. This app is written in Go, and open source, but has been conveniently bundled into an executable that you can download from its GitHub page at xeals/signal-back. I’ve got this executable in a folder that’s in my PATH environment variable, but you could copy it into the case folder if you like. The command syntax is:

signal-back.exe format signal-2019-01-01-01-30-22.backup > signalMessages.xml

After this you will be prompted for the password, which is not echoed to the screen. If you get a long error or anything to do with parsing, you may have a password issue – try again. If everything was successful, you now have an XML file that is compatible with SMS Backup and Restore.

Throw this data into a compatible tool and presto! Signal data! One last note: contact names aren’t present in the XML. I don’t know if the Signal backup database includes them or not, but the way I deal with this is by exporting all native contacts using a forensic tool and applying them to the XML based on phone numbers. You could also do this manually.
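The phone-number matching can be scripted as well. This is a rough sketch against the SMS Backup and Restore attribute names (address, contact_name); the contacts lookup and the naive number normalization are mine, so adapt both to your data.

```python
# Sketch: fill in contact_name attributes in an SMS Backup and Restore XML
# export by matching phone numbers against a contacts lookup you supply.
# The last-10-digits normalization is deliberately naive; adapt as needed.
import xml.etree.ElementTree as ET

def apply_contact_names(xml_path, contacts, out_path):
    """contacts maps the last 10 digits of a number -> display name."""
    tree = ET.parse(xml_path)
    for sms in tree.getroot().iter("sms"):
        digits = "".join(ch for ch in sms.get("address", "") if ch.isdigit())[-10:]
        if digits in contacts:
            sms.set("contact_name", contacts[digits])
    tree.write(out_path, encoding="utf-8", xml_declaration=True)
```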

forensicBlend: Designing a scalable community plugin API https://forensicmike1.com/2019/05/11/forensicblend-designing-a-scalable-community-plugin-api/?utm_source=rss&utm_medium=rss&utm_campaign=forensicblend-designing-a-scalable-community-plugin-api https://forensicmike1.com/2019/05/11/forensicblend-designing-a-scalable-community-plugin-api/#respond Sat, 11 May 2019 14:09:10 +0000 https://3.88.229.156/?p=149 I decided to start writing this series to document my work on forensicBlend, a project I previewed on Twitter yesterday that takes device logs and translates them into a modern report format that can be searched, filtered, and exported. One of my fundamental design goals is to provide a high level of extensibility and allow […]

The post forensicBlend: Designing a scalable community plugin API appeared first on forensicmike1.

I decided to start writing this series to document my work on forensicBlend, a project I previewed on Twitter yesterday that takes device logs and translates them into a modern report format that can be searched, filtered, and exported. One of my fundamental design goals is to provide a high level of extensibility and allow community developers to contribute. That is, I want for people who know how to script to be able to contribute their own custom logic and see it work in my apps (and ultimately, result in a better timeline tool). This is something Eric Zimmerman touched on in our interview last week.

I scaffolded some of the UX (above) based roughly on how I want this to work. Essentially- list the currently installed plugins, prompt for updates where available, and provide a way to browse the online repository to download additional plugins.

Side note: I’ve had some questions regarding what UI framework I am using. The above is a WPF app which uses the excellent MahApps.Metro and Material Design in XAML libraries. These are free offerings that you can use in your own WPF project to elevate your UI to the next level.

Requirements

There are really two areas of work here to think about: the plugin API itself (what to do with the packages once they are installed), and package hosting/redistribution. Here are a few overall design considerations and requirements I came up with:

  • As a lone developer, time and cost savings are a priority. If there are any wheels that have already been invented, don’t invent new ones unless the need be great.
  • It’s 2019 and it’s therefore important we take time to consider things like security. Packages containing plugins may have DLLs (more on this later) with code that will ultimately be executed by our app. This could (and should) be considered a potential attack surface. We can mitigate this with some of the following:
    • Packages should be signed and verified at every step of the way.
    • Community created plugins and updates will be managed centrally and undergo a thorough, manual code review and testing before they are posted to the online package library (think Apple’s App Store).
  • Plugin packages should have versioning capabilities, and upgrading to the latest version should be as seamless as possible, bearing in mind that not all users will have internet access.
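To make the signing requirement above concrete, here is a minimal sketch of package integrity verification, shown in Python for brevity (the apps themselves are .NET, and real NuGet signing uses X.509 certificates rather than a bare digest check); the package bytes and digest below are purely illustrative:

```python
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    # Hex SHA-256 digest of the raw package bytes.
    return hashlib.sha256(data).hexdigest()

def verify_package(data: bytes, expected_digest: str) -> bool:
    # Constant-time comparison against the digest published in the
    # (trusted, separately distributed) package index; reject on mismatch.
    return hmac.compare_digest(sha256_digest(data), expected_digest.lower())

pkg = b"fake package bytes"
good = sha256_digest(pkg)
print(verify_package(pkg, good))         # True
print(verify_package(pkg + b"x", good))  # False -- tampered package rejected
```

The same check would run client-side before a plugin is ever loaded, and again server-side when a community submission is ingested.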

Hosting and Package Distribution

I knew going into a project like this that I wanted to use Amazon Web Services (AWS). The cost effectiveness, sheer scalability, and all-around cool factor of AWS made this an easy design decision. Off the top of my head, I expect to be using the following AWS components:

  • API Gateway
    • Create and administer web endpoints for the app.
  • Certificate Manager
    • Free SSL certificate!
  • CloudFront
    • Content Delivery Network (CDN) to ensure low latency, high speed access to data from anywhere in the world
  • Cognito
    • Complete User Account Management and Authentication
  • EC2 (Elastic Compute Cloud)
    • Host a micro instance of some sort of RDBMS. Or perhaps we will try out a NoSQL solution, DynamoDB, for science / learning, and because DynamoDB has a permanent free tier option.
  • Elastic Load Balancing
    • Distribute incoming application traffic across multiple targets in several Availability Zones.
  • Lambda
    • Provide the business logic for serving our REST API to answer questions like “What are all the plugins currently available and what is the latest version?”
    • Provide the business logic for facilitating and monitoring package downloads (users like to see download counts), and potentially provide a ‘thumbs up / thumbs down’ interaction or possibly even comments
  • Route 53
    • DNS Registration
  • S3
    • Secure, encrypted, redundant hosting of the compiled packages themselves
    • Web front-end for users who choose to browse it this way.
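The Lambda piece above ("What are all the plugins currently available and what is the latest version?") boils down to a small handler over the plugin index. A hedged sketch in Python, with a hypothetical in-memory index and invented plugin names standing in for DynamoDB:

```python
# Hypothetical in-memory plugin index; in production this would live in
# DynamoDB (or another data store) and be queried from the Lambda handler.
PLUGIN_INDEX = {
    "sms-parser": ["1.0.0", "1.2.0", "1.10.0"],
    "call-log":   ["0.9.1", "1.0.0"],
}

def latest_version(versions):
    # Compare numerically, not lexicographically: "1.10.0" > "1.2.0".
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

def list_plugins(event=None, context=None):
    # Lambda-style entry point returning the catalog as a JSON-able dict.
    return {
        "plugins": [
            {"id": pid, "latest": latest_version(vs)}
            for pid, vs in sorted(PLUGIN_INDEX.items())
        ]
    }

# call-log resolves to 1.0.0 and sms-parser to 1.10.0 (not 1.9.x or 1.2.0).
print(list_plugins())
```

The download-count and thumbs up / thumbs down endpoints would follow the same shape: a small handler, a table lookup or increment, a JSON response.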

Keep in mind I’ve touched on fewer than 10 of the things AWS can do for you; the actual list is much, much longer. For most of us, the usage involved will fall in or around the Free Tier, so basically what I’m saying is you can get all of the above at NO COST. If you are reading this and thinking, “Why am I still renting web space like I did in 2005?”, that is an excellent question. You may want to migrate! There is also an irreplaceable feeling you get when you realize you are using the exact same serverless environment as some of the largest corporate juggernauts out there.

Packaging Technologies

Earlier I mentioned leveraging as many existing technologies as possible. Almost every .NET programmer out there is familiar with NuGet. From NuGet themselves:

NuGet is the package manager for .NET. The NuGet client tools provide the ability to produce and consume packages. The NuGet Gallery is the central package repository used by all package authors and consumers.

So if I’m writing an app, and I want to bring in code from a library to perform a specific function, I can open the NuGet Package Manager in my development environment (Visual Studio shown below) and perform a search for the function I need. Then it’s one click to install and be off.

It goes even further by providing strong versioning, licensing, dependency tracking, and more. Behind the scenes, NuGet uses .NUPKG files, which are containers that bake a lot of this functionality in for us and provide desirable features like package signing.

Since NuGet already does everything we need and then some, for free, why would we design our own solution from scratch? Notice the recurring theme here?
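Under the hood, a .NUPKG file is just a ZIP archive with a .nuspec XML manifest inside, which is part of why reusing it is so attractive. A quick illustrative sketch in Python (the toy package and plugin name are invented here, and real .nuspec files also carry an XML namespace that this sketch omits):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Build a toy .nupkg in memory: a ZIP containing a minimal .nuspec manifest.
NUSPEC = """<?xml version="1.0"?>
<package>
  <metadata>
    <id>SmsParserPlugin</id>
    <version>1.2.0</version>
  </metadata>
</package>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("SmsParserPlugin.nuspec", NUSPEC)

def read_nuspec(package_bytes: bytes) -> dict:
    # Locate the .nuspec manifest inside the package and pull out the
    # fields a plugin manager needs for install/upgrade decisions.
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as z:
        name = next(n for n in z.namelist() if n.endswith(".nuspec"))
        meta = ET.fromstring(z.read(name)).find("metadata")
        return {"id": meta.findtext("id"), "version": meta.findtext("version")}

print(read_nuspec(buf.getvalue()))  # {'id': 'SmsParserPlugin', 'version': '1.2.0'}
```

That id/version pair is exactly what the update prompt in the UX mockup needs to compare against the online package library.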

Next steps

So we know we’re going to use NuGet as a package management solution, and we know we’re going to use AWS for community hosting and package distribution, but what about the actual code to load said plugins? This could be the most entertaining part, but also the most time-intensive. For the purposes of development, I will need to look at how to extract and use content from a NuGet package at runtime.

I’m going to have to weigh the advantages of dynamic code compilation (source code compilation at runtime) vs. distributing pre-compiled binaries (DLLs) and simply loading them.
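For the pre-compiled route, the core mechanic is loading plugin code from an arbitrary file at runtime. In .NET that would be Assembly.LoadFrom() on a plugin DLL; the sketch below shows the same idea in Python with importlib, using an invented toy plugin:

```python
import importlib.util
import pathlib
import tempfile

# Write a toy plugin to disk, then load and invoke it at runtime --
# the rough analogue of calling Assembly.LoadFrom() on a plugin DLL.
PLUGIN_SOURCE = '''
def parse(record):
    # A community parser: turn a raw log entry into a report row.
    return {"artifact": "demo", "raw": record}
'''

plugin_path = pathlib.Path(tempfile.mkdtemp()) / "demo_plugin.py"
plugin_path.write_text(PLUGIN_SOURCE)

def load_plugin(path):
    # Import a module from an arbitrary file path, outside the normal
    # search path, exactly as a plugin manager would after unpacking.
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

plugin = load_plugin(plugin_path)
print(plugin.parse("0x1F"))  # {'artifact': 'demo', 'raw': '0x1F'}
```

Either way, this loading step is where the signature verification from the requirements list has to happen first, since whatever gets loaded runs with the app’s privileges.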

Stay tuned for the next article in the series where we will get coding!


]]>
Chatting .NET with Eric Zimmerman https://forensicmike1.com/2019/05/06/chatting-net-with-eric-zimmerman/?utm_source=rss&utm_medium=rss&utm_campaign=chatting-net-with-eric-zimmerman https://forensicmike1.com/2019/05/06/chatting-net-with-eric-zimmerman/#respond Mon, 06 May 2019 12:58:18 +0000 https://3.88.229.156/?p=140 I don’t think anyone in the Digital Forensics world would dispute that Python is the most used language in forensic programming today. In fact, many of its more fanatical followers frequently remind us of its ostensibly long list of superior characteristics. To the extent I think sometimes people might forget that there exists other programming […]

The post Chatting .NET with Eric Zimmerman appeared first on forensicmike1.

]]>
I don’t think anyone in the Digital Forensics world would dispute that Python is the most used language in forensic programming today. In fact, many of its more fanatical followers frequently remind us of its ostensibly long list of superior characteristics, to the extent that I think people sometimes forget other programming languages exist at all. Recognizing this, I knew I wanted to write a post discussing one of my favorite technologies — C# and .NET as a whole — but I could think of no better guest contributor to bring into that conversation than Eric Zimmerman, one of, if not the, most recognizable names in forensic coding, and a staunch supporter of the tech.

Eric is the mastermind behind KAPE, Registry Explorer, JumpList Explorer, AmCacheParser, and so many more. Like many readers, I was introduced to Eric’s work early on in my forensics career – right at the beginning, in fact, as part of the curriculum of my “forensics 101” course at the Canadian Police College. I am honored to chat with him about one of my favorite subjects!

forensicmike1: Thanks so much for taking part in this conversation Eric! I am curious to hear what brought you into the .NET world initially, and what is it that’s kept you there for all these years?

Eric Zimmerman: I initially started my development career in Access. When I outgrew that, I moved on to VB6 (way back in the pre-.NET days). Once .NET came out, I slowly switched to VB.NET because I already knew VB. I always wanted to do C#, but did not want to have to re-learn things, so I held onto VB.NET for a long time. In fact, osTriage v1 and 2 were both written in VB! Soon after osTriage v2 came out, I decided to force myself into C# for a few projects and I have never looked back from that point.

So for me, it is a matter of wanting to use a first class language on the platform I deal with the most, which is Windows. I am a big believer in the concept of doing Windows forensics on Windows, Mac forensics on a Mac, and so on. You are just asking for issues when you do not do things this way. For example, a very popular method for accessing volume shadow copies on Windows does not, at least in some cases, present the data for access the same way as native methods in Windows do. This leads to corrupt files being exported, and obviously that’s a problem when it comes time to process them. Does this happen all the time? No, but even once is enough that I would be hesitant to trust that method in any case that matters, unless I also verified getting the data in exactly the same way from Windows natively. At this point, however, you are now doubling your work, so why bother with the non-Windows method at all?

I stay with .NET because it’s what I know and what works for a wide range of needs. I know it’s not going anywhere, and it has great IDEs and other resources for efficient development, debugging, logging, and so on.

The other huge advantage is its range of 3rd party controls for creating amazing graphical user interfaces (GUIs) that just do not exist anywhere else. Things like grids, tree views, and a ton of other controls I use in my GUIs aren’t available elsewhere, so I wouldn’t be able to write something like Registry Explorer in Python — and even if I did, it wouldn’t do what it can do on the Windows side.


forensicmike1: I couldn’t agree more! And I’ve seen this happen over and over to people as they make their way to C#. Forensically speaking, can you think of any other advantages to writing code in .NET?

Eric Zimmerman: With .NET, I know the runtime I need is going to be in place by default — or will be in the vast majority of cases. I do not have to worry about making a self-contained executable, or not handling Unicode correctly, or not being able to install something where I need it.

Going back to what I said earlier, I feel you should do Windows forensics on a Windows box, so this makes things a lot easier for end users of my software. With my stuff, you can download and unzip my programs on any machine and they will most likely work the first time without issues. This can be on a forensics box doing dead box work, or live response against a running system in the field.

Speed is also a big thing for me. I tend to do a lot of work to tune my code so that it is, first and foremost, as accurate as it can be. Once this is done, I tune for performance. As the old saying goes, speed is fine, but accuracy is final. When you look at forensics programs written in other languages (Rust being an exception that comes to mind), the performance is often terrible and it takes a lot of work to get the environment ready to even run an application. Sure, the developer can do some work to package a Perl or Python program into a self-contained Windows executable, but that process can be painful and it still does not address the performance issues. Can performant code be written in Python? Maybe, but it involves redoing parts in Cython, or writing critical sections in C++, and so on. So while it is possible, to me it’s just not worth it, especially in light of the issues I mentioned above. Getting accurate data is of course paramount, so even one time where you might not get accurate data is one too many to take the chance.

When writing forensic tools that target Windows artifacts, what Windows does and says should be the target we aim for. If you can exceed what Windows lets you see and do, all the better. Shedding light on data in a different way is always a good thing, but not at the expense of excluding or missing things (or the risk of doing so).

At the end of the day, I would rather my code run amazingly well on one platform, than poorly on five platforms.


forensicmike1: Aside from the fact that not many people in forensics are familiar with it, can you speak to any disadvantages?

Eric Zimmerman: The funny thing about that is, most people are using .NET all over the place every day if they use a Windows box. Just because they may not be aware of it does not mean it isn’t there.

I don’t really see any disadvantages for the tool chains I design and use, but running .NET code on non-Windows platforms has obviously been an issue in the past. This is becoming less and less of an issue with Microsoft becoming more involved in the open source world — remember that .NET Core is open source now — and it is furthered by being able to run PowerShell on Linux too.

So at some point in the not so distant future, the code I write will be cross platform (at least the CLI tools). In some cases, the code can already run on .NET Core and Standard. The big hold-up for me personally in this regard is that .NET Core and Standard do not have a seamless way to make a single executable for each platform. I hate distributing 38 DLLs along with the executable for my programs, so until I can do this on Linux or a Mac the same way I can on Windows (i.e. giving you a single executable to run) I won’t be doing cross platform stuff full time.

For a lot of people, the biggest hurdle to using .NET is not a technical one, but rather a bias against Microsoft or Windows for some reason. Given how easy it is to stand up a VM these days, “I can’t run X because it is Windows only” just shouldn’t be a valid excuse anymore.


forensicmike1: Do you think programming is a legitimate specialization within the field of Digital Forensics, or is it something every examiner should at least dabble in at this point?

Eric Zimmerman: Well, I don’t know if it’s forensic programming that is a specialty, or the ability to program in a way that is necessary for use in the kinds of work we do in forensics that is more important. In other words, you do not have to be IN forensics to be able to look at programming in the way I am speaking of. What does this look like in practical terms? For me, it means failing early and often (i.e. NEVER, EVER eat error messages or other “unknown” conditions), programming defensively (i.e. protecting the end user from themselves to a degree), sanitizing input, providing the ability to see diagnostic and trace messages for debugging purposes, robust output options, and so on. (Forensicmike1: This is great advice and I hope some vendors are reading!)

Not everyone is wired to be able to program at higher levels and I am certainly no expert in the field. In fact, not even 10 years ago I started looking for a way to process LNK files natively in one of my live response programs. Looking at a LNK file in a hex editor, I said to myself “I would never be able to program something to read these things”, but now I have native parsers for just about every key Windows artifact out there — all of which I did in C#. I learned how to code and parse things partly out of necessity (they didn’t exist prior to my work) or because the existing tools did not do the job (incomplete, inaccurate, slow, etc) and I thought I could do better. Of course, curiosity and wanting to solve a problem comes into it too (I do not want to even think about how many hours I have spent looking at shellbags).

With that said, no one is expected to walk into DFIR and be able to write a forensic parser for an artifact on day one. In fact, most people just don’t have a reason to do so. It is certainly beneficial to have at least some level of proficiency with programming so you can whip up some code to automate the mundane, though, so this is a good reason to at least get familiar with something like PowerShell, C#, Python, etc., even if it is limited to looping over thousands of log files looking for things and saving yourself the pain of doing it manually.


forensicmike1: In your view, are the major forensic software vendors doing enough to provide ways for established developers who do forensics as a primary job to integrate their creations? If not, any thoughts on what they could do better?

Eric Zimmerman: This is a tough one because of the different languages vendors write their programs in. Does a vendor use .NET, C++, or Delphi? Each in turn would have different ways for external users to hook into it when writing code.

My suggestion to vendors is to provide the ability to write plugins that can be used by the vendor’s product. X-Ways, for example, has an API that lets you write such things. Several of my tools do as well (plugins in Registry tools, maps in event logs, targets and modules in KAPE).
(Forensicmike1: Funny that the vendor that uses Delphi is also the only one who has done any .NET Plugin work!)

The other avenue is to come up with a non-programming means (or a balance of programming and non-programming) to interact with and extend programs. Things like maps in EvtxECmd or batch file mode in RECmd are good examples here. Both allow end users to wield the capabilities of tools and extend them as far as they see fit, all without me being involved.

I think the biggest benefit for end users is designing open ended and extensible tools that people can then take to places the developers never thought of before. It is pretty cool to hear about some of the use cases and ways people have put my stuff to use. They find all kinds of new uses and ways to do things I never envisioned when I designed the programs.

By doing this, it’s not about the author of the program anymore, but rather it’s about the end-user and making their job easier, the data more clear, the work more efficient, and so on. Letting the end-user reduce the noise in order to find the signal THEY want to find is what is important.


forensicmike1: Final word goes to you- Any advice for up-and-coming forensic coders who may be hesitant to share their work with the world?

Eric Zimmerman: Throw that code out there! Remember, there will always be a first for everything and you were not good at anything the first time you did it (or even the first 100 times!). Put that work out there, get it into people’s hands, let them play with it, make suggestions, break it, and so on.

Do not let anyone tell you anything in this space is a “solved problem” because the best way by far to learn about an artifact is to write a parser for it. And you never know, you may just find long standing bugs in major products that people have just taken for granted and assumed were right for the past 20 years.

Even if no one ever uses your code on a case, the fact that you created something from nothing is a great feeling. Seeing your code do what you intended it to do, seeing all your unit tests pass for the first time, seeing the output come out of a program you wrote from start to finish is a magical thing. It still excites me when I get into a new project.

Share that code, talk about that project, seek out the experts in your field to review and help and provide feedback. I cannot tell you how valuable peers are to bounce ideas off of, test things, and push my ideas to even better places. Two people (among many) that come to mind for me and have done these kinds of things hundreds of times for me over the years are David Cowen and Matt Seyer. Why are they in a position to do this? Because they too took that chance way back in the day to put out code, take a risk, be vulnerable, and EXPLORE THAT DATA in an effort to understand how it works, why it works, and the best ways to leverage that data to help us tell the story of what happened on a computer. As Matt and I like to say, “Every byte counts!”. There is a reason for them to be there. Seek to find out exactly why they are there.

So, in summation, my advice would be:

  • Take calculated risks.
  • Learn from your mistakes.
  • Leverage peers.
  • Move the ball forward.
  • Leave things better than you found them.

Follow Eric on Twitter @EricRZimmerman or visit his website at https://ericzimmerman.github.io/


]]>