Archive for the Notes to myself Category

Persistent GNU Radio Live SDR Environment on your Mac

Posted in Notes to myself, SDR, Security on 2014/10/19 by mram

Quick manual to get the GNU Radio Live SDR Environment to run on your Apple Mac.

The process is the same as using the regular unetbootin tool and the ISO image, but for your Mac it takes a few essential extra steps. I couldn't find a manual online, so perhaps this is of use to someone.

  1. Format your USB stick using Disk Utility. Create one new partition, using MBR boot partition layout (check Options). Make it FAT.
  2. From the command line, check what disk identifier the stick has using ‘diskutil list’. In my example it's /dev/disk2
  3. Unmount volumes: ‘diskutil unmountDisk /dev/disk2’
  4. Make the newly created partition active:
    1. fdisk -e /dev/disk2
    2. f 1
    3. write
    4. exit
  5. Get syslinux
  6. Write the syslinux MBR image to the stick’s MBR: ‘dd conv=notrunc bs=440 count=1 if=syslinux/bios/mbr/mbr.bin of=/dev/disk2’
  7. Get unetbootin for Mac OSX
  8. Get GNU Radio Live SDR
  9. Start unetbootin, have it write to the USB stick with input file the downloaded iso.
  10. If you want persistence, enter a size for the persistent storage
  11. Mount the new volume and add ‘persistent’ to the boot parameters in the boot/grub/grub.cfg config file. I also removed the quiet and splash options for better troubleshooting
  12. Reboot holding down the option key and select the disk to boot from.
  13. Profit.
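Steps 2 through 6 above can be collected into one small script. This is a sketch under assumptions: your stick showed up as /dev/disk2 and the syslinux tarball is unpacked in the current directory. To be safe it only prints the destructive commands by default; set DO_IT=1 once you've verified the disk identifier.

```shell
#!/bin/sh
# Sketch of steps 2-6. /dev/disk2 is an example; check `diskutil list` first.
# Dry-run by default: commands are printed, not executed. Set DO_IT=1 to run.
DISK=${DISK:-/dev/disk2}
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

# step 3: unmount the volumes on the stick
run diskutil unmountDisk "$DISK"
# steps 4.1-4.4: mark partition 1 active, scripted via fdisk's stdin
printf 'f 1\nwrite\nexit\n' | run fdisk -e "$DISK"
# step 6: write the syslinux MBR image to the stick's MBR
run dd conv=notrunc bs=440 count=1 if=syslinux/bios/mbr/mbr.bin of="$DISK"
```

Review the echoed commands carefully before rerunning with DO_IT=1; dd to the wrong disk will trash it.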

WiFi Pineapple and Mac OS X Internet Sharing

Posted in Notes to myself, pentest, Security on 2014/10/03 by mram

Important: this approach does not seem to work since Mac OS X 10.10 Yosemite.

This one is for you Mac users out there who want to share your Mac’s WiFi internet connection via the LAN cable to the WiFi Pineapple. The out-of-the-box Internet Sharing option of your Mac doesn’t work with the WiFi Pineapple. I had run into this before, but never gave it a good look and just switched to Linux. Today it frustrated me again, so I looked into it.

The problem with the setup is twofold: 1) the Pineapple expects the 172.16.42.0 subnet, while OS X uses 192.168.2.0 when enabling Internet Sharing, and 2) the Pineapple expects the default gateway on 172.16.42.42, which is not a very logical address for a gateway. Now, we could change all these settings on the Pineapple to match the Mac’s, but sometimes your situation may require otherwise. I couldn’t find any manual on the internet, so here are the steps you need to take:

  1. Disconnect the LAN cable between the Mac and the Pineapple.
  2. On the Mac go to Internet Sharing and share your WiFi adapter to the LAN interfaces. Once enabled, disable it again and close System Preferences. We need this step to write a default config file that we can alter.
  3. The config file that we need to alter is /Library/Preferences/SystemConfiguration/com.apple.nat.plist. We need to add an option “SharingNetworkNumberStart 172.16.42.0”. You can manually add this as a dict at the end of the file, or you can use the command “sudo defaults write /Library/Preferences/SystemConfiguration/com.apple.nat NAT -dict-add SharingNetworkNumberStart 172.16.42.0”. This makes sure that 172.16.42.0/24 is now used as the subnet for the sharing interface, which fixes our first problem.
  4. Use the GUI again to start Internet Sharing.
  5. Manually change the IP address used by the Mac’s LAN interface with the command “ifconfig bridge100 172.16.42.42 netmask 255.255.255.0 up”.
  6. Now we need to change some DHCP options, because by default the DHCP server tells the clients to use gateway 172.16.42.1. We do this by altering file /etc/bootpd.plist. There are two mentions of 172.16.42.1 that we need to change into 172.16.42.42. We also need to adjust the pool range. Look for the <key>net_range</key> section. Alter the starting address to 172.16.42.43.
  7. Find the PID of the bootpd process and give it a kill -HUP to reread its config file.
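Steps 3 and 5-7 can be sketched as one script. Hedge: the sed one-liner for step 6 is my own shortcut, not from the original write-up, and it only rewrites the two 172.16.42.1 mentions; you still need to raise the net_range start to 172.16.42.43 by hand. The script prints the commands by default; set DO_IT=1 to execute.

```shell
#!/bin/sh
# Sketch of steps 3 and 5-7. Dry-run by default; set DO_IT=1 to execute.
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }

# step 3: make Internet Sharing hand out 172.16.42.0/24
run sudo defaults write /Library/Preferences/SystemConfiguration/com.apple.nat \
    NAT -dict-add SharingNetworkNumberStart 172.16.42.0
# step 5: after re-enabling Internet Sharing in the GUI, re-address the bridge
run sudo ifconfig bridge100 172.16.42.42 netmask 255.255.255.0 up
# step 6 (my shortcut, verify by hand): point DHCP clients at .42 instead of .1;
#        adjust the net_range start address to 172.16.42.43 yourself
run sudo sed -i '' 's|>172\.16\.42\.1<|>172.16.42.42<|g' /etc/bootpd.plist
# step 7: make bootpd reread its config
run sudo kill -HUP "$(pgrep bootpd)"
```

Run it once in dry-run mode, check the printed commands against the steps above, then rerun with DO_IT=1.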

That’s it. Now you can connect the LAN cable and enjoy internet from your Pineapple.

SSLsplit on WiFi Pineapple

Posted in Notes to myself, pentest, Security on 2014/07/26 by mram

Update: after this blog post somebody made an Infusion for SSLsplit on the WiFi Pineapple. That’s great! You can still use the howto below, but the easier way is to install the Infusion via the Pineapple bar.

Recently I was asked by a client to do a penetration test on one of their mobile apps. Fun stuff. One of the things I always test is the security of the communication channel; often HTTP over SSL is used for that. The WiFi Pineapple is a great companion for this, as it provides an easy way to set up a wireless access point with some attacks on the communication, leaving your own pentest machine free for other attacks.

The default approach to analyzing traffic is to become man-in-the-middle between the app and the server it communicates with. This is easily done by configuring the mobile device with a proxy (if the app communicates via a proxy-aware protocol and accepts the system proxy settings) or by redirecting traffic using iptables on the Pineapple. Then have Burp or any other proxy tool running to intercept and modify the traffic. Nothing new here.

But what was special about this specific engagement was that Burp (or any other proxy tool I know) was unable to interpret the traffic. Yes, the iptables redirection was working; yes, the SSL MitM worked without a problem. Burp showed the initial request, and Wireshark showed the traffic being forwarded to the actual server the app wanted to communicate with. But nothing happened after that. No data, nothing. After some tinkering the hypothesis was formed that the app used non-HTTP traffic over SSL, which our proxy tools don’t understand.

This is where I learned about the great tool SSLsplit. It’s a proxy tool able to do full SSL certificate forging and full HTTPS decoding, but also able to just show the decoded TCP and SSL traffic if it can’t decode it into HTTP. Exactly what I needed! I had some compiling issues getting it to run on my Kali pentest machine. I’m sure these could be fixed, but I just tried installing it directly on the Pineapple. Turned out it works like a charm. Here is what you need to do:

  • SSH to your Pineapple and update the packages using opkg update
  • Get the OpenWRT libevent2 packages (all 5) from the official mirror at http://downloads.openwrt.org/attitude_adjustment/12.09/ar71xx/generic/packages/
  • Download the unofficial OpenWRT build of SSLsplit for OpenWRT at project Ghost on Github: https://github.com/ShaPOC/ProjectGhost/blob/master/software/sslsplit/bin/sslsplit
  • Generate the SSL certificate authority and key for SSLsplit to use.
    • openssl genrsa -out certificate.key 4096
    • openssl req -new -x509 -days 365 -key certificate.key -out certificate.crt
    • Depending on the config of the mobile App you may need to import the newly generated certificate.crt onto the device.
  • Know what non-intuitive parameters SSLsplit requires:
    • mkdir /tmp/sslsplit (make a working directory)
    • mkdir /tmp/sslsplit/contentlog (make a directory for session logs inside the working directory)
    • ./sslsplit -k certificate.key -c certificate.crt -D -l connections.log -S /tmp/sslsplit/ -L contentlog ssl 0.0.0.0 8888
    • This starts sslsplit with:
      • using the cert authority we just created, used for certificate forging
      • debug output to the main screen (I found this useful, you may not)
      • working dir /tmp/sslsplit, duping the actual content of the connections to /tmp/sslsplit/contentlog/
      • decoding traffic that comes in on port 8888 as SSL
  • Redirect the traffic we want to analyze to port 8888, with a simple iptables script
    • root@Pineapple:~# cat pineburp_split.sh
      #!/bin/sh
      echo '1' > /proc/sys/net/ipv4/ip_forward
      iptables -X
      iptables -F
      iptables -t nat -F
      iptables -P INPUT ACCEPT
      iptables -P FORWARD ACCEPT
      iptables -P OUTPUT ACCEPT
      iptables -t nat -A PREROUTING -p tcp -d @@SPECIFIC_DEST_IP@@ --dport 443 -j REDIRECT --to-ports 8888
      iptables -t nat -A POSTROUTING -j MASQUERADE
  • Start your app and see if it accepts the SSL certificate. In my case it did (bad for the app, good for the pentester) and the content was dumped on the Pineapple in /tmp/sslsplit/contentlog with a file per TCP session.
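To poke through the dumped sessions afterwards, a couple of one-liners (paths as used in the sslsplit invocation above) go a long way:

```shell
# one file per TCP session, newest first
ls -lt /tmp/sslsplit/contentlog/ 2>/dev/null | head
# pull the printable parts out of the captured payloads to spot
# credentials, tokens or protocol keywords
strings /tmp/sslsplit/contentlog/* 2>/dev/null | head -40
```

If the payload turns out to be a binary protocol, feed the session files to a dissector instead of strings.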

Full SSL decode. Awesome!

What hardware to choose when building a GPU based password cracker right now (Q1 2012)?

Posted in Notes to myself, Password cracking on 2012/02/06 by mram

GPU based password cracking has unmatched power when brute force cracking. Although brute force cracking is only part of the game (see also my over a year old post on CPU based cracking not being dead here), any modern security testing lab includes GPU password cracking functionality.

The field of GPU hardware is heavily in development. What was top of the line 18 months ago is only somewhat reasonable right now. As I’m in the process of upgrading the GPU hardware in our security testing lab myself, I just researched several possibilities with the current state of GPU hardware taken into account. This may be different in a few months, but for now (Q1 2012) these are the best picks I could find, and I thought I’d share them with you.

I narrowed it down to four different options, ranging from a few hundred to 13.000 Euro.

Common decisions for all possible options

Before diving into the different options, let’s discuss a few main decisions that are the same for any way you go.

Power is not really an issue when you can combine power supplies

GPU cards consume a lot of power. Having several GPU cards in your box requires a massive PSU; we are talking 1200+ Watt here with a few modern cards. High-wattage PSUs are expensive, especially the ‘80 PLUS’ certified ones. You do want those, as they are guaranteed to be at least 80% efficient: no more than 20% of the power drawn from the outlet is lost as heat, the byproduct of any PSU. But as you do consume a lot of power you need a big, and therefore expensive, PSU. Fortunately there are easy solutions to combine several mid-range PSUs into the PSU of your requirements. ADD2PSU allows you to daisy-chain even more than two PSUs into one. The Lian-Li Dual Power Supply Adapter (availability is spotty, not sure if it’s still shipped) allows you to combine two PSUs into one. Both are simple solutions to our problem. Of course you can do this yourself with soldered cables, but with these solutions at these prices (Eur 20) I wouldn’t start tampering with electrical power myself.

When picking PSUs make sure to take ones that have enough connectors, preferably a modular PSU like the Corsair AX1200 that lets you attach the cords yourself.

CPU, chipset and main memory don’t really make a difference

It is all about the GPU cards. Unless you want to do more with the box you are creating, I wouldn’t spend too many Euros on top of the line CPUs, chipsets and memory. Any Intel socket 1366 or even socket 1155 is good enough. If you want to go AMD, socket AM3 or AM3+ is good enough. Of course you can go for the newest sockets, but it doesn’t provide you with more cracking power. The same goes for MHz: it will not provide you with more cracking power. Memory should be enough to run your OS of choice and some more. Don’t be on the cheap side, no computer runs OK with insufficient RAM, but I still need to find the first cracking program that requires gigabytes of memory, except for rainbow tables (in that case system RAM does matter a bit, but you should calculate your needs based on the size of the tables you are using).

Be smart and don’t pick top of the line here on CPU, socket and main memory. It will save you a considerable amount of money that you can then spend on GPU cards.

One of the commentators (Bitweasil, author of the Cryptohaze Multiforcer cracking tool, so definitely somebody who has experience with this) recommended matching system RAM with the RAM on the GPUs. With system RAM being very cheap nowadays and most GPU cards shipping with about a gig of RAM, you would probably match it by using a ‘default’ amount of 4-8GB. He also recommends matching the number of CPU cores with the number of GPU cards, just in case GPU drivers are not optimized as they should be. I guess this makes sense, but it also shouldn’t be a problem with most CPUs nowadays being multi-core.

PCIe1x is fast enough

This is an important one when choosing your main board. Many boards are advertised with X amount of PCIe16x slots. But when you look closer at the specs you notice that the 16x speed is shared between slots. So when for example slot 1 and slot 3 are used simultaneously, they are both downgraded to PCIe8x or even lower. If you think “more is better” this really makes it hard to pick a main board with as many PCIe16x slots as possible. I’ve got news for you: main boards with 8 slots of true PCIe16x are rare to non-existent. But there is also no need for them. If you go gaming (still the largest market for makers of main boards with many PCIe slots) you want to go SLI with some PCIe16x. In that case the cards mostly communicate via the SLI bridge and not via the PCIe bus. But we go password cracking, not gaming. And for password cracking PCIe1x is fast enough.

PCIe works with lanes. The number of lanes is a power of two between 1 and 32 and is represented by the number directly after “PCIe”. PCIe16x means 16 lanes; 16x seems to be top of the line on most boards. PCIe version 2 (which is the most used version for GPU cards and main boards right now) has a speed of 500MB per second per lane. Now, with games, textures and vertices are continuously processed by the cards. These are heavy calculations on big sets of input data that together require significant throughput on the PCIe bus. But with password cracking we are talking simple operations on small data. The ‘data’ (the list of base input words that are to be hashed) and the ‘operation’ (the set of calculations the cores perform to compute the hash) are transferred over the bus only periodically. There is no way the GPU can calculate hashes so fast that it requires 500MB of data and operations every second; GPUs are simply not powerful enough at this moment to consume 500MB/s.

So, PCIe1x is speedy enough. Suddenly a lot more main boards become available :-)
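A quick back-of-envelope check of why the bus isn't the bottleneck for brute force: the host sends only a keyspace description (charset plus mask, a few dozen bytes), and the GPU expands the candidates on-device. For example:

```shell
# "8 characters, lowercase a-z" fits in a few dozen bytes on the bus,
# yet expands on the GPU to this many candidates:
awk 'BEGIN{printf "%.0f\n", 26^8}'
```

So the link mostly carries the keyspace description going in and the cracked results coming out, not the candidates themselves; even a single PCIe 2.0 lane has bandwidth to spare for that.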

Memory on the GPU card is not a limiting factor

This continues the discussion that the throughput of the GPU cards isn’t that big for password cracking compared to gaming. A gigabyte of memory on the card is a ridiculously huge amount that no tool will use. Perhaps only when you are using ridiculously large dictionary files. But if you are using dictionaries that approach 1 gigabyte you might want to verify the usefulness of the dictionary; brute force will be faster.

So, save yourself some money and don’t go for the GPU cards with a ridiculous amount of memory. It will not improve your cracking speed. With most cards nowadays shipping with 1-1.5GB of RAM, my pick would be those, and not the extra-expensive 2GB ones.

PCI riser cards can come in handy

With the PCIe1x slots also being usable for cracking, the only thing you need to overcome to use all PCIe slots on a main board is the fact that most GPU cards require the physical space of two PCIe slots. Flexible PCI riser cards come in handy here. If you can find a way to lift the cards and have a big enough box to fit all these double-width GPU cards, you can then interconnect them with the main board via (flexible) PCI riser cards. Many solutions exist. Note that in theory all you need is a PCIe1x connection (the shortest possible connector). Just make sure the card you buy allows for it without sawing holes in the PCI connector (and if you do want to saw in your PCIe equipment, here is an excellent tutorial: http://blog.zorinaq.com/?e=42).

AMD has the more powerful architecture

When buying GPU cards for password cracking you have two vendors to choose from: NVIDIA and AMD. Which one to pick? Short answer: go AMD; the benchmark results you find all over the internet back this up.

Long answer: go AMD because they have an architectural preference for more cores/ALUs, resulting in more parallel calculations. AMD has more cores at a somewhat lower speed, where NVIDIA goes for fewer cores at higher speed. For gaming there is not much between them, but AMD’s approach comes in handy for the task of password cracking. You can read up on all kinds of things like AMD’s move from the VLIW to the GCN architecture, NVIDIA’s current Fermi architecture that the GeForce 500 series is based on, or the move to the 28nm process that AMD already made and NVIDIA will make with the to-be-released GeForce 600 series, but the bottom line is that AMD’s approach is faster for password cracking.

The battle isn’t over; both NVIDIA and AMD have the same goal: continue the awesome graphics performance but also expand the use for general purpose computing on GPUs. So perhaps NVIDIA’s next move will change things, but for now go AMD.

Pick the AMD HD79XX series

AMD recently released the HD79xx series. My pick right now would be the HD7970 card. Its performance is top of the line, and the pricing is not ridiculous (check the stats at https://en.bitcoin.it/wiki/Mining_hardware_comparison). You can go one series below and pick the HD6970 or HD6990 (basically 2 HD6970s on one board), but only go that way if you find a nice discount.

In the next few weeks AMD will release the HD7990, which will basically be 2 HD7970s on one board. They did the same trick with the HD6990, and if that release taught us anything, it’s that availability will be very limited. If you buy one card that may not be a problem, but buy 4 of them at once and you may have an issue. Do note that AMD has an issue where no more than 8 GPUs are recognized by the system. So when going HD6990 or the future HD7990 you can only hold 4 of them (as these cards carry double GPUs). I’m sure NVIDIA has similar issues, I just don’t know the exact limit at this moment (it used to be limited to four cards about two years ago).

Linux support for AMD sucks, expect issues or wait for newer software versions

AMD has shown it doesn’t take Linux as seriously as Windows. The Catalyst drivers for Linux are a mess, although they are getting better and better. NVIDIA was in the same spot a few years back, and they have fixed it. AMD will also fix this, but it will take some time. Right now you can expect that the current release (12.1) has issues detecting the latest HD7970 card. Simply wait for a newer version or go Windows if you want to use this card.

So, with these main topics discussed, let’s dive into the four different options you have. Of course your budget is the main decider for which way you want to go; more budget pays for more power. I’ll start with the cheapest one.

Option 1: add new cards to your existing GPU cracker

Budget estimate: a few hundred Euro

If you already have a GPU box you can simply add or swap cards. As stated above, the CPU, memory and chipset will not hold you back. Simply add an HD7970 to your box, or if you already went NVIDIA, find yourself a nice GTX590 or a discounted GTX570.

My experience with combining AMD and NVIDIA cards in one box is pretty bad. You can expect issues at the driver level (does combining NVIDIA and AMD drivers sound like a good idea to you?) and with the password cracking tooling (you are pushing limits and may encounter bugs the creators never looked for). Good luck with that.

Bitweasil notes that he has had success with mixing AMD and NVIDIA on Linux (see his tips in the comments). I have not tried it, but given the driver model of Linux I would not be surprised if it works. My experience with mixing cards is on Windows 7, which has been far from trouble free.

Option 2: building a new tower model GPU cracker from scratch

Budget estimate: base system 1000 Euro + Euros for a maximum of 4 double-width GPU cards to add

If you don’t already have a GPU box you can simply build your own. The option explained here covers the hardware needed for a ‘simple’ tower model PC stacked with GPU cards to the max. Current off-the-shelf main boards allow for a maximum of 8 PCIe cards, which leaves room for a maximum of 4 double-width GPU cards.

As explained earlier you can go moderate on CPU, memory and chipset. The challenges here are to find main boards with as many PCIe slots as possible, and the right tower model case with room for all the GPU cards and PSUs. Cooling may also be an issue, although any big case allows for plenty of fans to be positioned.

Main board options

  • Gigabyte GA-X79-UD3: uses the latest Intel socket 2011, is advertised to handle 4-way-SLI (which in our case is important as it will handle 4 double width GPU cards) and is advertised in NL for around Eur190. Also, as it has 2 PCIe1x slots, if you start using PCI risers you can add even more cards.
  • Gigabyte GA-990FXA-UD7: for AMD CPUs. Not the newest socket but has 6 PCIe slots in 16x size, one in PCIe1x size and a traditional PCI slot. Supposed to handle 4-way SLI and advertised around Eur190.
  • Gigabyte GA-X79-UD7: basically the same as the Gigabyte GA-X79-UD3 but this one doesn’t have any traditional PCI slots. At Eur300 it’s more expensive and I would only pick it if you go with PCI riser cards to fully use the extra slots. Also, this main board requires an XL-ATX case (discussed later on).
  • Gigabyte GA-X58A-UD9: uses an older Intel socket but comes with 7 PCIe slots, all PCIe16x size but not all 16x speed. Can handle 4-way SLI. Advertised around Eur400, but not sure if it is shipped anymore. It needs an XL-ATX case. I would only pick this one if you go the PCI riser route and choose GPU cards that don’t support a PCIe1x connector.
  • EVGA 270-WS-W555-A2: supports the Intel 1366 socket (if you want to go Intel Xeon), has 7 PCIe slots and can cope with 4-way SLI. Advertised around $600, which I find expensive, but some prefer the ‘professional’ approach EVGA takes with its main boards. The main reasons for this one are the brand and dual Xeon CPU support. For all slots to be filled you need a case that can hold 9 PCI cards. See below for a list.
  • MSI Big Bang MARSHAL B3: a somewhat older Intel socket (1155), but has 8 PCIe slots available, all full size, reasonably priced at Eur340. However, I can’t find it at many web shops so availability may be an issue.
  • MSI 890FXA-GD70: recommended by Bitweasil, who has good experience with it. Takes AMD CPUs and fits 4 double-width GPU cards. I couldn’t find it anymore in NL web shops, but the last price it was known to go for was Eur180, which is pretty good.

Cases options

The main challenge with the case is size. Although not real standards, XL-ATX, Ultra ATX and HPTX are the terms to look for. Some of the cases I found:

Cooling

Make sure to spend some effort on cooling. With that many GPU cards and PSUs you will need it. Any big case you buy allows for fans to be added; make sure to use them.

Water cooling can be an option, but to be honest I don’t have experience with it so I can’t advise you on it. I also haven’t looked at the options, as our GPU cracking machines are positioned in an air conditioned lab.

Option 3: building your own scalable supercomputer on a budget

Budget estimate: base system 1000 Euro + Euros for as many GPU cards as you can fit

We will be using the same components here as with option 2, except for the case. The budget for the main computer is about the same, but as you can stack more GPU cards you can spend a bigger GPU budget before needing a second box.

The main issue with the previous option is that you will not be using all PCIe slots. With double-width GPU cards you need PCI riser cables to use all slots, and no case has room for 8 double-width GPU cards mounted away from the main board. So, what if we go without a case? The guys at HighSpeed PC have a product called the Top Desk Tech Station. It’s as simple as a case can be.

Now, with the advertised options you have the same space as in a normal XL-ATX case. However, they also build custom designs. I’ve been in contact with them about an extended version of their HPTX model. It’s fairly easy for them to adjust the design so you can lift the GPU cards and stack 8 double-width cards. I’ve seen the not yet released design and it simply rocks, as it has a third level for the cards that use the PCI risers. You can go even further and use PCI splitters to combine several cards on one PCIe slot (do note AMD’s maximum of 8 recognized GPUs). The Top Desk Tech Station XL-ATX goes for Eur180.

Pricing for the custom build (which will become a new product as they receive more and more demand) is not settled at this moment. But the price they offered me for the custom build is only a tad higher and still very reasonable.

Now, for connecting the cards to the main board you need flexible PCI riser cables. These come in 16x size and in 1x size, priced around 10 to 30 Euro per cable.

Cooling does become an issue, so make sure you attach enough fans to your system. In my situation, where the box is in an air conditioned environment, these cooling issues are non-existent.

If you are worried about warranty, find yourself a local computer dealer that will build this system for you and sell it as one. That way they can handle any warranty issues you encounter.

Option 4: buying a pre built super GPU computer

Budget estimate: Eur13.000 excluding tax and shipping

The final option you have is to go professional and buy a solution from the guys at Renderstream. My pick would be the VDACTr8-A model. It can hold 8 double-width GPU cards. The Renderstream solution is based on the TYAN FT77B7015 barebone with a custom built S7015 main board that has the PCIe slots positioned so that it takes 8 double-width GPU cards.

Perhaps you can purchase these components yourself and save some money. I did look into this but had a really hard time finding shops where you can buy the TYAN FT72B7015 and the main board; eventually I gave up. Also, buying the entire solution from one vendor has real added benefits in terms of warranty and service. Be sure to ask them for a quote yourselves, but think around Eur13.000 for the basic VDACTr8-A model with 8x HD7970, excluding tax and shipping. A positive note for us Europeans: in the second half of 2012 they will be opening their warehouse/shipping center in Europe.

Update: added a few remarks from Bitweasil’s comments below so they are in line with the text. Also added more details on the custom build from the guys at Top Desk.