WebRTC with Pion

For a project I’ve been using Pion WebRTC, which is a Go implementation of the WebRTC API.

It is easy to set up and provides great performance.

Some tips when using Pion WebRTC, or WebRTC in general:

  • Use multiple TURN/STUN servers. I’ve been using coturn and Twilio’s STUN/TURN servers.
  • Use Trickle ICE to speed up the initial connection.
  • When using H264, use the h264reader to send NALs (see the setup sketch below).
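
For that last tip, here is a rough sketch of the setup that the streaming snippet in the next section relies on. The file name, the track/stream IDs and the peerConnection variable are placeholders on my side, not something Pion prescribes:

// h264reader lives in github.com/pion/webrtc/v3/pkg/media/h264reader.
// Open the raw H264 file and wrap it in Pion's h264reader.
file, err := os.Open("output.h264")
if err != nil {
	panic(err)
}
h264, h264Err := h264reader.NewReader(file)
if h264Err != nil {
	panic(h264Err)
}

// Create the video track and attach it to the peer connection.
videoTrack, videoTrackErr := webrtc.NewTrackLocalStaticSample(webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeH264}, "video", "pion")
if videoTrackErr != nil {
	panic(videoTrackErr)
}
if _, err := peerConnection.AddTrack(videoTrack); err != nil {
	panic(err)
}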

Streaming with Pion WebRTC

Once you’ve done all the peer/ICE handling, you can send video/audio. Below is the piece of code I’ve been using to send an H264 stream through WebRTC:

// Wait until the ICE connection is up before sending media.
<-iceConnectedCtx.Done()

// Send one NAL roughly every 33ms (~30 fps).
h264FrameDuration := time.Millisecond * 33
ticker := time.NewTicker(h264FrameDuration)
defer ticker.Stop()
for ; true; <-ticker.C {
	nal, h264Err := h264.NextNAL()
	if h264Err == io.EOF {
		fmt.Println("All video frames parsed and sent")
		break
	}
	if h264Err != nil {
		panic(h264Err)
	}

	if h264Err = videoTrack.WriteSample(media.Sample{Data: nal.Data, Duration: h264FrameDuration}); h264Err != nil {
		panic(h264Err)
	}
}

Similarly, for audio, you can send incoming Opus packets like this:

audioTrack, audioTrackErr := webrtc.NewTrackLocalStaticSample(webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeOpus}, "audio", "pion")
if audioTrackErr != nil {
	panic(audioTrackErr)
}

// Buffer for a single Opus packet read from audioReader.
p := make([]byte, 960)

for {
	n, err := audioReader.Read(p)
	if err == io.EOF {
		break
	}
	if err != nil {
		panic(err)
	}
	// Each Opus packet carries 20ms of audio.
	if err = audioTrack.WriteSample(media.Sample{Data: p[:n], Duration: time.Millisecond * 20}); err != nil {
		panic(err)
	}
}
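
Note that a track also has to be attached to the peer connection before the offer/answer exchange, otherwise it is never negotiated. A minimal sketch, assuming your *webrtc.PeerConnection is called peerConnection:

// Attach the Opus track to the peer connection so it gets negotiated.
rtpSender, rtpSenderErr := peerConnection.AddTrack(audioTrack)
if rtpSenderErr != nil {
	panic(rtpSenderErr)
}

// Read and discard incoming RTCP packets so the interceptors keep running.
go func() {
	rtcpBuf := make([]byte, 1500)
	for {
		if _, _, rtcpErr := rtpSender.Read(rtcpBuf); rtcpErr != nil {
			return
		}
	}
}()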

Signaling and WebRTC

A great way to do signaling is through WebSockets. I can highly recommend Gorilla WebSocket, which provides a clean API for handling WebSockets in Go.

Upon receiving an ICE candidate in a WebSocket message, you can pass the candidate to Pion (for example through a Go channel). The answer can then be sent back to the remote peer as another WebSocket message.
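
Here is a minimal sketch of what that can look like with Gorilla WebSocket. The JSON message format (an object with event and data fields) and the handler wiring are my own assumptions, not something Pion or Gorilla prescribe, and the sketch calls AddICECandidate directly rather than going through a channel:

import (
	"net/http"

	"github.com/gorilla/websocket"
	"github.com/pion/webrtc/v3"
)

var upgrader = websocket.Upgrader{}

func signalingHandler(peerConnection *webrtc.PeerConnection) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Trickle ICE: send local candidates to the remote peer as they are gathered.
		peerConnection.OnICECandidate(func(c *webrtc.ICECandidate) {
			if c == nil {
				return
			}
			_ = conn.WriteJSON(map[string]string{"event": "candidate", "data": c.ToJSON().Candidate})
		})

		for {
			msg := map[string]string{}
			if err := conn.ReadJSON(&msg); err != nil {
				return
			}
			switch msg["event"] {
			case "candidate":
				// Remote ICE candidate received over the WebSocket; hand it to Pion.
				if err := peerConnection.AddICECandidate(webrtc.ICECandidateInit{Candidate: msg["data"]}); err != nil {
					panic(err)
				}
			case "offer":
				// Apply the remote offer, create an answer and send it back over the same WebSocket.
				if err := peerConnection.SetRemoteDescription(webrtc.SessionDescription{Type: webrtc.SDPTypeOffer, SDP: msg["data"]}); err != nil {
					panic(err)
				}
				answer, err := peerConnection.CreateAnswer(nil)
				if err != nil {
					panic(err)
				}
				if err := peerConnection.SetLocalDescription(answer); err != nil {
					panic(err)
				}
				_ = conn.WriteJSON(map[string]string{"event": "answer", "data": answer.SDP})
			}
		}
	}
}

In a real setup you’d also want to set a proper CheckOrigin on the Upgrader and protect the WriteJSON calls with a mutex, since Gorilla connections only support one concurrent writer.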

macOS Virtualization.Framework

The macOS Virtualization.Framework allows you to run up to 2 macOS VMs (Virtual Machines) on Apple hardware.

The limit of 2 VMs per machine comes from Apple’s EULA, which explicitly sets a maximum of 2 virtualized copies of macOS per Apple machine.

The framework, which runs on Apple Silicon, comes with paravirtualized graphics, which means using Metal in the VMs works pretty well.

To get started, you can either implement the framework yourself, or use one of the open-source projects: Tart or VirtualBuddy.

You’ll need an .ipsw restore image of your macOS version of choice (macOS Monterey or higher), after which you can create a brand new VM.

There are some limitations to using the framework:

  • No iCloud support (yet)
  • Bridge networks require a special entitlement (com.apple.vm.networking) from Apple
  • Random crashes, flaky performance with Xcode 14+ on macOS Ventura 13.4 and higher
  • No multiple resolution support

I’ve had some limited success using BetterDisplay’s old open-source project to add multiple-resolution support to a Virtualization.framework VM.

The advantages of using the framework are very fast boot times for macOS VMs, speedy graphics and ease of use.

Using ditto on macOS

Recently I was investigating why this command returned a ‘not authorised’ response:

spctl --verbose --assess --type execute --v ${fileName}.app

It turned out to be because I had zipped the .app file like this:

zip -r zipped-file ${fileName}.app

This causes spctl to no longer find the correct notarization details.

Using ditto is a better solution:

ditto -c -k --sequesterRsrc --keepParent ${fileName}.app zipped-file.zip

Oh, and by the way: do not use jar xf to unzip the file; use plain unzip or ditto instead if you don’t want spctl to complain.

TCP MSS clamping with iptables for IPSec tunnel

When routing traffic through an (IPsec) tunnel, an endpoint might need to do MSS clamping if you are experiencing MTU issues.

For example, say you are using a site-to-site VPN with a specific gateway as the endpoint. When browsing through the tunnel, some websites might not load properly.

An example of using iptables to fix this problem:

iptables -A FORWARD -s 10.1.0.0/18 -o ens4 -p tcp -m tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1360

This will set the MSS to 1360 for TCP traffic coming from 10.1.0.0/18 and leaving via interface ens4.

The exact value depends on your setup; 1360 is a common choice for IPsec tunnels, because the clamped MSS has to leave room for the TCP/IP headers plus the overhead added by IPsec encapsulation.

Multiple default gateways on Linux

Suppose you have a Linux machine doing IP forwarding (net.ipv4.ip_forward=1).

Depending on the incoming traffic, you might want to forward the packets to different gateways.

With just one gateway, you can simply add (or replace) the default gateway:
ip route add default via x.x.x.x

If you want to set a default gateway for a specific (incoming) IP range, you can add a custom routing table, using iproute2:

  • echo 200 custom >> /etc/iproute2/rt_tables
  • ip rule add from 10.1.2.0/24 table custom
  • ip route add default via y.y.y.y table custom
  • ip route flush cache

Electron with custom Chromium build

I was looking into a way to customise the Chromium code in an Electron app. As it turns out, it’s not as difficult as it might sound, though it requires some patience (mainly because building Chromium takes a lot of time, RAM and CPU).

To get started, make sure you have installed depot_tools from Google.
It’s a good idea to provision a git cache as well:

$ export GIT_CACHE_PATH="${HOME}/.git_cache"
$ mkdir -p "${GIT_CACHE_PATH}"

Now you can fork Electron and add your Chromium patches.
It’s important to handle whitespace and newlines correctly; Electron has a couple of scripts that will generate the patch files for you.


Next, let’s configure the build:

$ mkdir electron && cd electron
$ gclient config --name "src/electron" --unmanaged https://github.com/[your-fork-name]/electron
$ gclient sync --with_branch_heads --with_tags

Once that completes successfully, you can indicate the build config you want to use (the following commands are run from the src directory). In our case, let’s use the release config:

$ gn gen out/Release --args="import(\"//electron/build/args/release.gn\") $GN_EXTRA_ARGS"

$ ninja -C out/Release electron

This will take a while to build, depending on your CPU, RAM and disk.

When ninja finally completes, you might want to build a package of Electron:

$ ninja -C out/Release electron:electron_dist_zip

You now have a zip file, which you can use with, for example, @electron-forge. Make sure to specify the correct config in your package.json:

"config": { "forge": { "packagerConfig": { "electronZipDir": "../custom-electron" } }

The zip files should be named similar to these:

  • electron-v15.1.2-darwin-x64.zip
  • electron-v15.1.2-win32-x64.zip

Now you can build your Electron app with the custom Chromium build.

VMWare Fusion – modify DHCP

If you are running VMWare Fusion, chances are you might have created your own custom network adapter.

In case you’re running an (authoritative) DHCP server in this subnet, you might see interference with VMWare Fusion’s own DHCP server.

You can easily disable the Fusion DHCP server by following these steps (no Fusion restart required):

  • set DHCP to no for your adapter (the answer VNET_x_DHCP line) by editing /Library/Preferences/VMware\ Fusion/networking, e.g. with sudo nano
  • apply the new settings with:

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --configure
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start

You might also need to disable the macOS bootp process:
sudo /bin/launchctl unload -w /System/Library/LaunchDaemons/bootps.plist

Automated Browser Testing with Puppeteer

If you are interested in browser automation, you probably have heard of Puppeteer.

Puppeteer is a Node.js library that connects to Chromium-based browsers through the DevTools Protocol.

Puppeteer sends the same messages back and forth that the Chrome DevTools themselves use, which is what allows it to control and interact with the browser.

There are some advantages to using this method instead of Selenium (WebDriver):

  • It is faster, because the DevTools protocol is natively supported and uses a WebSocket connection instead of the HTTP requests that WebDriver relies on.
  • The default mode is headless, which means no UI is visible. If you are automating your browser, chances are you don’t really need to see it. If you are doing UI tests and do want to see the browser, Puppeteer has a ‘headful’ mode as well.
  • Regular updates. Puppeteer is maintained by Google, which means it keeps up with new Chrome releases and features.

Ready to get started? I can recommend reading the article Puppeteer Testing, which will guide you through setting up and configuring Puppeteer and a test framework such as Jest, WebDriverIO or PyTest.

In case you’re looking for an alternative solution, I can recommend Playwright. It offers the same set of features, uses the same technology under the hood and has broader browser support.

Happy Testing!

Removing ‘System Volume Information’ from a NTFS Volume

There’s a quick and easy way to remove the ‘System Volume Information’ folder from an NTFS disk. Run these commands in an elevated shell:

D: (or whichever volume letter you are using)
takeown /r /f "System Volume Information"
rd /s /q "System Volume Information"

Streaming MySQL backup

This week I needed to back up a Percona MySQL server.
One solution for this is to stop the MySQL server, create a mysqldump, and transfer it to your backup location.

However, depending on your tables and data size, this might not be the best solution. Especially if the database you want to backup is a live database with active users.

The solution for me was to use xtrabackup (innobackupex) from Percona to stream the database in tar format over SSH to another server:

innobackupex --stream=tar . | ssh user@x.x.x.x "cat - > /mnt/backup/backup.tar"

Once this is done, the other server needs to unpack the tar and prepare the backup:

xtrabackup --prepare --target-dir=/var/lib/mysql

At the end of this command, you should see an OK message.
If all went well, you can now do:

chown -R mysql:mysql /var/lib/mysql

and restart the MySQL server. The binlog position will be included in the output of xtrabackup --prepare, so you can easily set up master/slave replication.

Finally, I created a cronjob on the MySQL slave server that takes a daily backup with xtrabackup and uploads it to third-party secure storage.