I was reminded of this tip during the CTF at a recent DC207 meetup. This config change is needed on machines with modern versions of OpenSSL that have disabled the older ciphers. The issue is that old TLS and SSL versions and their associated cipher suites have become insecure, and support has subsequently been dropped from OpenSSL.
As a workaround, you can edit the following lines at the bottom of /etc/ssl/openssl.cnf:
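The exact lines vary by distribution, so treat this as a sketch of the typical Debian/Ubuntu-style change; lowering MinProtocol and the cipher security level re-enables the legacy protocols, so revert it once you no longer need it.

[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=1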
The course costs a minimum of $800 USD and includes 30 days of lab access and one OSCP exam attempt. There are packages that include longer lab access, and you can extend your access if you find you need more time to prepare.
What ISN’T the OSCP
Current methods and techniques
It won’t make you a l33t hax0r, but you will learn fundamentals
How long did you study?
I started working on it in September 2018, then life and the holidays got in the way of dedicated study time. I kept slowly and intermittently practicing until April 2019, when I REALLY got serious about completing the OSCP and crunch time began. I am lucky that my partner was on board with me locking myself away to focus on labbing. I took the exam on May 9th, 2019.
How did you study?
I started by going through Offensive Security’s Penetration Testing with Kali Linux (PwK) workbook and then watching the associated videos. They are both fantastic resources providing the solid base of knowledge you need for the exam. I had the PwK workbook printed out and bound to save my eyes from staring at a screen. Throughout all my studies, I took a lot of notes. I used these notes when working on machines in the lab, on the exam, and on other CTF-style boxes I worked on. Below are copies of the notes I created while studying.
Once I completed the workbook and videos, it was time to sit down and start working on machines in the Lab. While working on the labs I began to branch out and learn from various sources across the internet. As I worked through the lab and got closer to my exam date, I started to focus on my weak topics, which for me were Windows Exploitation and Windows Privilege Escalation. I have added some of the main links and books I used to study; there are many more links in my notes.
Penetration Testing: A Hands-On Introduction to Hacking – Georgia Weidman
The Hacker Playbook: Practical Guide To Penetration Testing – Peter Kim
The Hacker Playbook 2: Practical Guide To Penetration Testing – Peter Kim
Hacking: The Art of Exploitation – Jon Erickson
OMG the Exam…
The OSCP exam is a practical test: 24 hours of hacking in a mock environment, attempting to break into various targets. You then have another 24 hours to write a report based on your findings from the exam. To obtain your OSCP you must submit a report; I’ll talk more about the report later. The Exam is proctored: you will run software that captures your screen and webcam, both of which are monitored by one or more proctors. There are limits to the tools you can use during the Exam:
Spoofing (IP, ARP, DNS, NBNS, etc.)
Commercial tools or services (Metasploit Pro, Burp Pro, etc.)
Automatic exploitation tools (e.g. db_autopwn, browser_autopwn, SQLmap, SQLninja, etc.)
Mass vulnerability scanners (e.g. Nessus, NeXpose, OpenVAS, Canvas, Core Impact, SAINT, etc.)
Features in other tools that utilize either forbidden or restricted exam limitations
You are limited to using Metasploit once during the Exam
These limitations are an example of why it is important to fully read through the exam guide and reporting template to make sure you have all the proofs and meet the reporting requirements. These guides are found at the following links:
When planning for my Exam I created a high-level schedule to follow. This was an important way for me to get organized. My exam started at 9:00 am, allowing me to follow a routine similar to my normal one.
Wake up … Breakfast
Connect to Proctor and follow the process – 15 mins before start
Receive access details and connect to VPN – 15 mins
Read requirements and write them down in notes – 30–45 mins
Initial Enumeration of targets – 1 hour
Hack Away!
Eat Lunch
Hack…
Eat Dinner
Probably still Hack…..
Exam Tips and Tactics
This is a list of various, mostly non-technical, tips I have for taking the Exam. When reading through people’s experiences in Reddit threads, on Twitter, and in blog posts, I saw that a lot of people ran into less-than-technical issues when taking their Exams.
I’ll repeat this here: make sure you read through the exam guide and reporting template so that you have all the proofs and meet the reporting requirements!
Attempt to limit distractions and find ways to go into flow
Manage your time wisely
I used Pomodoro to help divide up my day. This method is roughly 25 minutes of work, then a 5-minute break, repeated. I changed targets on each cycle if I was not making progress and was just grinding away on a machine. This method kept me from getting stuck on one machine for extended periods of time.
Keep a timeline of the day
This will help you reference any screenshots or recordings you created later.
You are your own worst enemy: Avoid going down a rabbit hole
Breathe… go for a walk… pet a cat… have a snack…
Enumerate Enumerate Enumerate
If you are not finding your way into a system or the way to escalate privilege, enumerate more.
Screenshot, screen record, and track everything! This will take the stress off of creating the report the next day.
Reporting
There are two topics when it comes to reporting: the Lab report and the Exam report. Offensive Security provides a guide for reporting at the following URL: https://support.offensive-security.com/pwk-reporting/. This contains some templates and some recommendations on how to manage data.
One of the first questions people ask is whether I did the Lab report. I decided not to: it is only worth 5 points, and I did not find that the time to create the report was worth it for me. However, I did write a mock report to practice ahead of the Exam. This made sure that my first Exam reporting experience was not during the Exam itself, when I would be exhausted.
When it comes to my Exam report, I started it after I had finished my Exam but had not yet closed out with my proctor, creating a very, very rough document with the screenshots and other content. I did this to make sure I had satisfied all of the requirements, and it let me go back and recreate or regather any proofs I may have missed. After I thought I had everything and the adrenaline had started to wear off, I went to sleep, then picked the document back up and finished it throughout the next day.
In Closing
The OSCP was a great experience and very challenging
There is a lot to learn
Make sure significant people in your life understand the time commitment
This is the first part of this adventure; the next part will explore the firmware of the device. With that, let’s take a look at the hardware.
Teardown Time
After the device showed up, I quickly got down to taking it apart. In my haste, I didn’t take many good photos of it intact. The front side of the board is straightforward; it contains the screen, the button array for all user input, and a lot of useful test points. The front side is pictured below.
Board Front
The most significant information found on the front side of the board is the notation PG-103, which is also found in the firmware (spoiler). After some searching, I found this device is also branded as the PGST PG-103. This kind of rebranding of hardware is not unusual for a lot of Chinese devices.
Now switching to the back of the board, which is the business side, with the main chips and modules providing the various communication methods. When opening the device I encountered the intrusion detection button. This button causes the device to go into an alarm mode and requires a reset of the device to come back online. For my testing, I bypassed this button by bridging both sides of it.
Board Back
Component List
When inspecting the board, I found a few significant components and modules. I was not surprised to see that most of the major communication parts are off-the-shelf modules. The components listed below are highlighted in the image above, and the relevant datasheets, where available, are linked.
The main processor is a GigaDevice GD32 chip, a series that is very similar to the STMicroelectronics STM32 chips. The GD32F105 chip uses an ARM-based instruction set and has the same pinout as the STM32F105 component.
Block Diagram
The high-level block diagram for the device is pretty straightforward. The GD32F105 chip provides the primary processing and control of the external communication modules. This allows for a modular architecture for all of the peripherals.
There are many test points on the board, and by tracing them out I was able to follow most of the pins to where they connect on the controller.
SYN515R Pin 10 (DO) -> CPU PB9 (62)
Unknown -> CPU PA5
Unknown -> CPU PA6
Unknown -> CPU PA8
U7 SCL -> Unknown
U7 SDA -> Unknown
DAC_OUT -> CPU PA4 (20)
WIFI UART TX -> CPU PA2 (16)
WIFI UART RX -> CPU PA3 (17)
GSM UART TX -> CPU PA12 (45)
GSM UART RX -> CPU PA13 (46)
U1 (F117) Pin 6 -> CPU PB 8
Summary?
After investigating the hardware I was able to extract the firmware and start the reversing process. I will cover what I have found in future posts. For now, if you are interested in higher-resolution photos of the board, I have posted them on my Flickr account.
OpenSky is a proprietary trunking radio protocol designed to carry both voice and data traffic. The protocol is marketed as secure and private. OpenSky operates on the 700, 800, and 900 MHz bands.
OpenSky was originally developed by M/A-COM as part of the Monarch wireless voice and data system for FedEx in the 90s. Later M/A-COM was purchased by Tyco Electronics, which was then purchased by Harris RF Communications. Harris has now merged with L3 Technologies to become L3Harris. This protocol has been on a wild ride of mergers and acquisitions, hasn’t it? The original OpenSky protocol was upgraded in 2010 and named OpenSky2.
The integrated data capabilities in OpenSky allow for more features in a single base station than voice-only trunking systems. This integration has allowed dispatchers to have location data for radios in the field, the ability to send data to terminals in, for example, police cars, and the ability for users to log into handsets and pull down their profile with their various talk groups and other preferences.
OpenSky and OpenSky2 are TDMA-based protocols designed to operate using 25 W micro repeaters. OpenSky2 introduced support for the 900 MHz band and a narrower channel bandwidth. The table below compares them.
                        OpenSky      OpenSky2
Number of Slots         4            2
Raw bit rate (bps)      19,200       9,600
Channel Width (kHz)     25           12.5
Frequency bands (MHz)   700 / 800    700 / 800 / 900
Released                1999         2010
Signal Harbor describes three major components of the OpenSky signaling protocols:
FMP (Federal Express Mobile Protocol) – Providing Digital Voice
OCP (OpenSky Communication Protocol)
OTP (OpenSky Trunking Protocol)
These protocols are based on a modified CDPD (IS-732), similar to an IS-54 (D-AMPS) network. I could not find many exact details on lower-level protocol operations, other than that each radio is assigned an IP address.
Digital voice is encoded using the Advanced Multi-Band Excitation (AMBE) speech encoding standard. This is a proprietary standard developed by Digital Voice Systems, Inc. Interestingly, this standard has also been used in the Iridium network and XM Satellite Radio. There are more details on this standard here and here.
I found this protocol interesting because, like most technology, it’s a product of its time. In this case OpenSky comes from a time before the pervasive presence of 4G/LTE wireless. Today you can accomplish many of the same goals as an OpenSky system by utilizing current carrier LTE networks.
Below is a list of resources I used when researching this protocol:
Back on May 18th, I attended the inaugural BsidesNH event. It was a fantastic one-day event. The day started pretty early for me, driving down from Maine and arriving at Southern NH University, where I picked up the fantastic badge made out of an old 3.5″ disk. After grabbing some coffee and a snack, I settled into the auditorium for a day of great talks. A few stood out to me from the day, and I will talk about them below.
The second talk of the day was Ghost in the Shell: When AppSec Goes Wrong by Tony Martin. Tony first covered some basics of web application security. He framed these issues around the research he has done into various NAS devices and the vulnerabilities he has discovered, including the ability to create shadow users that have administrative access to devices but are not visible through the administrative interfaces.
After lunch was Chinese and Russian Hacking Communities, presented by Winnona DeSombre and Dan Byrnes, intelligence analysts from Recorded Future. They covered the operations and cultures of Chinese and Russian underground groups. This was a very entertaining presentation and a summary of the information contained in the report Thieves and Geeks: Russian and Chinese Hacking Communities.
The second-to-last talk of the day was Hunting for Lateral Movement: Offense, Defense, and Corgis, presented by Ryan Nolette. He covered the ways attackers move around and infiltrate further into a network… and Corgis. A great quote that stuck with me from his talk was: “If you teach an analyst how to think they will punch above their weight.” I feel this quote applies not only to security analysts but to all levels of IT professionals.
BsidesNH was a well-run and enjoyable event and a great addition to the security events in New England. Thanks to all of the organizers and sponsors. I look forward to attending next year!
During my OSCP studies, I realized I needed a more efficient system for cracking password hashes. The screaming CPU fans and high CPU usage became a problem. I first tried using hashcat and the GPU on my MacBook Pro in OS X, but there are bugs and problems with hashcat on OS X that would make it crash in the middle of cracking a hash. I was also not interested in investing in a server with a bunch of GPUs; the high cost would outweigh the amount of time I actually need the system. All of this led me to do a little research, and I found instructions at the following link for building an AWS instance for password cracking.
Since that post was created there have been some changes to the offerings in AWS EC2, leading me to write this post.
If you wish to skip ahead, I have created scripts to automate the processes in the rest of this post. They are both on my GitHub and can be downloaded at the following links.
For the rest of the article I will cover some of the instance options in EC2, installation of the needed Linux packages, the basic setup of Hashcat, running Hashcat, and finally monitoring and benchmarks of an EC2 instance.
AWS EC2 Options
There are many options for EC2 instances; they have a huge range in cost and scale.
I found the g3 instances to be the most cost-effective tier. For my testing I opted for the g3.4xlarge size. Next, choose an AMI image with the appropriate operating system.
AMI images
There are two options I tested hashcat on; they are both Ubuntu based. I’m sure there are many other available options that will work too, but I am familiar with Ubuntu systems. The first option is a standard Ubuntu image; there is nothing special about this image, and it requires configuration to add the GPU drivers, plus a little more work.
Standard Ubuntu
The next option is a Deep Learning image. This image is preconfigured with the GPU drivers and was originally designed for machine learning applications. I found the pre-configuration allowed me to skip a few steps in building out a new system.
Deep learning Ubuntu GPU driver preloaded
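Once you have picked an AMI, the instance can be launched from the console or, if you prefer, from the AWS CLI. Below is a rough sketch of a CLI launch; the AMI ID, key pair, and security group are placeholders you will need to substitute with your own values.

# Launch a g3.4xlarge instance from a chosen AMI (all IDs below are placeholders)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type g3.4xlarge \
    --key-name my-keypair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1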
Instance Build and config
Once you have the instance deployed, there are a few steps to get it prepared for hashcat; the steps are a little bit different between a Standard and a Deep Learning Ubuntu instance.
An apt cronjob may already be running and you will have to wait it out.
Prepare Machine (Standard Ubuntu)
This script will install all the required packages and the Nvidia GPU drivers on a vanilla Ubuntu installation.
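A rough equivalent on a recent Ubuntu release looks like the following; the ubuntu-drivers tooling is one assumed way to pull in the NVIDIA driver, and package names may differ slightly on other releases.

# Update the system and install build tools plus archive utilities
sudo apt update && sudo apt -y upgrade
sudo apt -y install p7zip-full build-essential linux-headers-$(uname -r)
# Install the recommended NVIDIA driver, then reboot so it loads
sudo apt -y install ubuntu-drivers-common
sudo ubuntu-drivers autoinstall
sudo reboot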
Compared to the previous script, the script to prepare the Deep Learning instance is much simpler. The main focus is installing the needed archive extraction tools.
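A rough equivalent for the Deep Learning image; since the GPU drivers are preinstalled, only the archive tools are needed.

# GPU drivers are already present on the Deep Learning AMI; just add 7z support
sudo apt update
sudo apt -y install p7zip-full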
Now we need to download and extract the star of the show: hashcat. The link in the wget below points to the most recent version as of this writing; however, you might want to check whether there is a newer version at the main site: https://hashcat.net/hashcat/
wget https://hashcat.net/files/hashcat-5.1.0.7z
7z x hashcat-5.1.0.7z
Download wordlists
You will need some wordlists for hashcat to use to crack passwords. The commands listed below fetch some wordlists I like to use when cracking; you should, however, add whichever lists are your favorites.
mkdir ~/wordlists
git clone https://github.com/danielmiessler/SecLists.git ~/wordlists/seclists
wget -nH http://downloads.skullsecurity.org/passwords/rockyou.txt.bz2 -O ~/wordlists/rockyou.txt.bz2
cd ~/wordlists
bunzip2 ./rockyou.txt.bz2
cd ~
Running hashcat
Now it is time to run hashcat and crack some passwords. When running hashcat I had the best performance with the arguments -O -w 3. Below is an example command line I’ve used, including a rules file.
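A reconstruction along these lines; the hash mode -m 1000 (NTLM) and the hashes.txt file are placeholders for your own hash type and file, and best64.rule ships with hashcat.

./hashcat-5.1.0/hashcat64.bin -O -w 3 -m 1000 hashes.txt ~/wordlists/rockyou.txt -r ./hashcat-5.1.0/rules/best64.rule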
The nvidia-smi utility can be used to show GPU processor usage and which processes are utilizing the GPU(s). The first example shows an idle GPU.
ubuntu@ip-172-31-17-6:~$ sudo nvidia-smi
Fri Apr 26 14:43:49 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.104 Driver Version: 410.104 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M60 Off | 00000000:00:1E.0 Off | 0 |
| N/A 37C P0 42W / 150W | 0MiB / 7618MiB | 97% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
This example shows a GPU being used by hashcat.
ubuntu@ip-172-31-17-6:~$ sudo nvidia-smi
Fri Apr 26 14:44:44 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.104 Driver Version: 410.104 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M60 Off | 00000000:00:1E.0 Off | 0 |
| N/A 46C P0 141W / 150W | 828MiB / 7618MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 11739 C ./hashcat-5.1.0/hashcat64.bin 817MiB |
+-----------------------------------------------------------------------------+
Conclusion and Benchmarks
Finally, here is a benchmark I ran on a g3.4xlarge instance. This instance type contains 1 GPU. These results give an idea of the performance of this AWS EC2 instance type.
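If you want to compare against your own instance, hashcat's built-in benchmark mode will run through its supported hash types:

./hashcat-5.1.0/hashcat64.bin -b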
I needed to set up a Meraki API key to test a Meraki API that was, at the time, in beta. This is the process I used to get started with some of the basics of the Meraki API and to get a test environment up and running. There are lots of great references covering the basics of REST APIs, like the REST API Tutorial; those resources will do a much better job than I can of explaining REST APIs. What I found lacking was a guide to the initial steps of gathering the data you need to get started with the Meraki API.
Note: The screenshots are from late 2018 and may have changed over time.
API Key Generation
First things first you will need to login to the Meraki Dashboard. Once there, you will navigate using the menu on the left to Organization -> Settings.
On the settings screen, scroll down to Dashboard API Access, check “Enable access to the Meraki Dashboard API,” and click Save at the bottom. Once general access is enabled, you will need to click the “profile” link to go to the screen where you generate an API key to use when making REST API calls.
On the API access screen, click the “Generate new API Key” button. If the button is not there: I found that my account can only have a maximum of two API keys generated at any point in time, and once I deleted one key the button came back.
After clicking the button, a dialogue similar to this will appear showing your new key. This key is only shown once, so make a note of it, since you will use it to authenticate your API calls.
Now that you have a key what to do with it?
Meraki has an extensive API with many calls, and you will want a tool to start testing some of them. A good utility to start with is Postman. This tool allows you to make REST API calls using a convenient GUI. I won’t go into complete detail on how to use Postman, but I will cover some highlights of getting it set up to test some Meraki API calls.
A useful feature of Postman is the ability to import collections of API calls. The collection of Meraki Dashboard calls is at https://create.meraki.io/postman. Once there, click “Run in Postman” in the upper right and it will ask to open the Postman client. Once you import the collection, there are some variables you will need to discover and fill in:
X-Cisco-Meraki-API-Key
organizationId
networkId
baseUrl
To set these variables you will need to edit the newly imported Postman collection; you can right-click on the collection and select “Edit.”
Then select the Variables tab. I have already populated these variables in the screenshot; you will need to type yours in.
Now you ask, where do I find the values for these variables? I’ll cover the calls that are made to collect the values you need in the next few sections.
Meraki API URL (baseUrl) and API Key
baseUrl – The first variable you will set is the baseUrl; this is the URL that Postman will use to send REST API calls to. In general, for testing you can use the URL:
https://dashboard.meraki.com/api/v0
This will work for testing and non-production. Once you go to production you will want to point to the specific shard you are hosted on such as:
https://n466.meraki.com/api/v0
X-Cisco-Meraki-API-Key – We will also need to set the API key we generated earlier. This is stored in the X-Cisco-Meraki-API-Key variable, which sets the header of the same name in REST calls and is used to authenticate them.
With these two variables set you can start to discover the organizationId and networkIds.
Finding the “organizationId”
To find the organizationId, in Postman navigate to “Organizations -> List organizations this user has access to” in the sidebar on the left.
The query in Postman looks like:
The full REST URL to retrieve the Meraki Organizations you have access to is https://dashboard.meraki.com/api/v0/organizations. The data returned lists the organizations you have access to; the “id” number is the field used to select the organization you wish to query.
[ { "id": 1234567, "name": "Organization name" }
Finding the “networkIds”
Many calls I have worked with use either the organizationId or a networkId. In most organizations there are multiple networks within the organization you are querying, and each of the networks is identified by a networkId.
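One way to list the networks, and therefore the networkIds, under an organization is the organization networks call. Here is a curl sketch, reusing the example organization id from above and a placeholder API key.

curl -H "X-Cisco-Meraki-API-Key: 0123456789abcdef" https://dashboard.meraki.com/api/v0/organizations/1234567/networks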
Now there is a test environment to play in and learn how the various API calls work and what data can be collected, set, or deleted. Postman is just the start for experimentation.