Friday, September 11, 2020

Finding a console Word Processor

I prefer to use terminal applications when possible; I'm just more comfortable in an XTerm. I'm not a purist about it: if a graphical application is better, then I'll use that instead.

I recently looked around to see if there are any word processors that work in a terminal. After trying all the available options, I decided that WordPerfect 6.2 for DOS from 1993 was exactly what I was looking for. This is probably a surprising choice. I'm not nostalgic for DOS software or a long-time holdout; I'm a new user in 2020.

It's remarkably powerful: I can use it over ssh and in tmux/screen, and there's still a community of knowledgeable users who will answer questions about advanced usage.

I mentioned this in a discussion recently, and had a few questions from people who wanted to try it out, so here are my notes on how I set it up.

Before we begin: I know from experience that many people reading this will be shouting that I should be using a text editor. I can assure you, a text editor is not a word processor. If you're not convinced, please keep reading; I'll explain why below.

Installing Dosemu

First, install dosemu2. Dosemu supports a terminal mode that translates VGA text mode operations into terminal operations using libslang. This makes software like WordPerfect look and act like a native ncurses application.

I used dosemu2 instead of the original dosemu: it's an updated and actively maintained fork that doesn't rely on Virtual 8086 mode. This means it works on x86-64.

Run dosemu -term and verify that you get a C:\> prompt. If everything looks okay, type exitemu to return to Linux.

Installing WordPerfect

Create an installation directory in your C:\ drive like this:

$ cd ~/.dosemu/drive_c

$ mkdir install

Then copy the installation files, and run the installer:

$ dosemu -t

C:\> cd install\install\wp62

C:\> install


I selected these options:

  • Standard Installation
  • Use Smart Prompting
  • No additional Graphic Drivers
  • No Sound Drivers
  • Yes Printer Drivers
    • Select Passthru PostScript

  • Install all Conversion Drivers
  • No Fax Files
  • Leave Serial Number Blank


There is an official patch available here that fixes a few minor bugs in the release build of WordPerfect 6.2. If you plan on using "classic" WordPerfect keybindings, I don't think it's necessary.

If you plan to use CUA keybindings (Ctrl+C to copy, Ctrl+V to paste, Shift+Up to select, and so on), I recommend installing it, because it fixes some of those bindings. If you're not an old-school WordPerfect user, you will almost certainly be more comfortable with CUA bindings.

I used wine to run the patch installer.

Configuring Dosemu

I made a wrapper script like this:

#!/bin/bash

declare wpcmd='C:\COREL\WP62\WP.EXE /df /ds /ns /tx /sa'

if test $# -eq 1; then
    declare dir=$(dirname "${1}")
    declare file=$(basename "${1}")
    # This assumes dosemu maps the starting directory to drive G:;
    # adjust the drive letter to match your configuration.
    if cd "${dir}"; then
        wpcmd+=" G:\\${file}"
    fi
fi

exec dosemu -term -E "${wpcmd}"

Call it wp and place it in your $PATH; now you can just run wp to start WordPerfect.

I also use this script to handle printer output; call it dosprint and also place it in your $PATH. It requires ghostscript to be installed.


#!/bin/bash

declare tmpfile=$(mktemp --tmpdir="${HOME}/Downloads" --suffix=.pdf printjob.XXX)

if ps2pdfwr - "${tmpfile}" > /dev/null; then
    xdg-open "${tmpfile}"
else
    rm -f "${tmpfile}"
fi


Here is my ~/.dosemurc, if you want to use it as a template:

# CPU shown to DOS, valid values: "80[23456]86"
$_cpu = "80686"
$_cpu_emu = "vm86"
$_cpu_vm = "emulated"
$_cpu_vm_dpmi = "emulated"
$_ems = (32768)

# if possible use Pentium cycle counter for timing. Default: off
$_rdtsc = (on)

# 0 = all CPU power to DOSEMU; default = 1 = nicest, then higher:more CPU power
$_hogthreshold = (1)

# choose the time source for the RTC emulation.
$_timemode = "linux"

# Keyboard and mouse
$_layout = "uk"
$_rawkeyboard = "on"
$_mouse_internal = (off)

# Printer and parallel port settings
$_lpt1 = "dosprint"

# idle time in seconds before spooling out. Default: (20)
$_printer_timeout = (5)

# speaker: default: "emulated", or "native" (console only) or "" (off)
$_speaker = ""

# sound support
$_sound = (off)

# built-in Packet Driver. Default: on
$_pktdriver = (off)

Configuring WordPerfect

If you're a new WordPerfect user like me, you will probably prefer to use CUA bindings. 

  • Shift-F1 -> Keyboard Layout
    • Copy CUAWP52, and name the copy DOSEMU
    • Edit DOSEMU
    • If you installed the patch, delete all the Pos/Sel macros; they're not necessary.
    • Remap Ctrl+L to the ItalicsKey command (Ctrl+I is indistinguishable from Tab in a terminal, so you can't use that)
    • Select DOSEMU

There is mouse support in dosemu, but I don't use it; you can enable it if you prefer.

Word Processing vs Text Editing

The reason I want a word processor is to format and lay out documents for printing. I believe this task is not well suited to a text editor.

Consider the following problem: I have a page of text I want to print in a proportional font (say, Helvetica 12pt). I need the document to fit on one page, and I'll edit it until it fits.

How would you do this with a text editor? Helvetica is a proportional font, which means calculating how each individual character will fit on the page is non-trivial. You'll have no feedback about where lines wrap or pages break as you edit.
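Here's a toy sketch of the geometry involved. The per-character widths below are made-up values (in 1/1000s of an em), but they illustrate the point: how many characters fit on a line depends on which characters they are, which is exactly what a text editor doesn't track.

```shell
#!/bin/bash
# Toy illustration of proportional-font line fitting. The widths are
# invented for this example, not real Helvetica metrics.
declare -A width=( [i]=278 [l]=278 [m]=833 [w]=722 [a]=556 )

line_fits() {  # print how many leading characters of $1 fit in $2 units
    local text=$1 limit=$2 used=0 n=0 c i
    for ((i = 0; i < ${#text}; i++)); do
        c=${text:i:1}
        (( used += ${width[$c]:-500} ))   # unknown chars get a default width
        (( used > limit )) && break
        n=$(( n + 1 ))
    done
    echo "$n"
}

echo "narrow (iiiiiiiiii): $(line_fits "iiiiiiiiii" 2780) chars fit"
echo "wide   (mmmmmmmmmm): $(line_fits "mmmmmmmmmm" 2780) chars fit"
```

Ten narrow characters fit where only three wide ones do; a text editor working in columns has no way to tell you that.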

This is really easy in a word processor: it understands font geometry and physical page dimensions, and gives you immediate, realtime feedback as you type about where lines will wrap and pages will break.


I know someone will ask about security! 😂

Yes, I work in security, and yes, I'm using software that hasn't been updated in nearly 30 years.

I only use it to create new documents, but if you do need to open untrusted documents, please don't use unsupported software! DOSEMU2 does make an effort to safely contain software, and can be configured to limit its access to the host.

Further Reading

If you're interested in reading more about WordPerfect DOS, there's a comprehensive site dedicated to it here, and an active community who answer questions about it here.

Wednesday, July 29, 2020

You don’t need SMS-2FA.

I believe that SMS 2FA is wholly ineffective, and advocating for it is harmful. This post will respond to the three main arguments SMS proponents make, and propose a simpler, cheaper, more accessible and more effective solution that works today.

Just like yesterday's topic of reproducible builds, discussions about SMS-2FA get heated very quickly. I've found that SMS-2FA deployment or advocacy has been a major professional project for some people, and they take questioning its efficacy personally.

Here are the main arguments I’ve heard for SMS 2FA:

  • SMS 2FA can prevent phishing.
  • SMS 2FA can’t prevent phishing, but it can prevent “credential stuffing”.
  • We have data proving that SMS 2FA is effective.

I’ll cover some other weaker arguments I’ve heard too, but these are the important ones.

Does SMS 2FA Prevent Phishing?

I assume anyone interested in this topic already knows how phishing works, so I’ll spare you the introduction. If a phishing attack successfully collects a victim's credentials, then the user must have incorrectly concluded that the site they’re using is authentic.

The problem with using SMS-2FA to mitigate this problem is that there’s no reason to think that, after entering their credentials, the victim would not also enter any OTP they receive.

I’ve found that lots of people find this attack difficult to visualize, even security engineers. Let’s look at a demonstration video of a penetration testing tool for phishing SMS-2FA codes to see the attack in action.

There are a few key details to notice in this video.

  1. The SMS received is authentic. It cannot be filtered, blocked or identified as part of a phishing attempt.
  2. Notice the attacker's console (around 1:05 in the video). For this demonstration it only contains a single session, but it could store unlimited sessions. The attacker does not have to be present during the phishing.
  3. Installing and using this software is no more complicated than installing and using a phishing kit that doesn’t support SMS-2FA.
  4. An attacker does not need to intercept or modify the SMS, in particular no “links” are added to the SMS (this is a common misconception, even from security engineers).
  5. The phishing site is a pixel perfect duplicate of the original.

I think a reasonable minimum bar for any mitigation to be considered a “solution” to an attack, is that a different attack is required. As SMS-2FA can be defeated with phishing, it simply doesn’t meet that bar.

To reiterate, SMS 2FA can be phished, and therefore is not a solution to phishing.

Does SMS 2FA Prevent “Credential Stuffing”?

Credential stuffing is when the usernames and passwords collected from one compromised site are replayed to another site. This is such a cheap and effective attack that it’s a significant source of compromise.

Credential stuffing works because password reuse is astonishingly common. It’s important to emphasise that if you don’t reuse passwords, you are literally immune to credential stuffing. The argument for SMS-2FA is that credential stuffing can no longer be automated. If that were true, SMS-2FA would qualify as a solution to credential stuffing, as an attacker would need to use a new attack, such as phishing, to obtain the OTP.

Unfortunately, it doesn’t work like that. When a service enables SMS-2FA, an attacker can simply move to a different service. This means that a new attack isn’t necessary, just a new service. The problem is not solved or even mitigated, the user is still compromised and the problem is simply shifted around.

Doesn’t the data show that SMS 2FA Works?

Vendors often report reductions in phishing and credential stuffing attacks after implementing SMS-2FA. Proponents point out that whether SMS-2FA works in theory or not is irrelevant, we can measure and see that it works in practice.

This result can be explained with simple economics.

The opportunistic attackers that use mass phishing campaigns don’t care who they compromise, their goal is to extract a small amount of value from a large number of compromised accounts.

If the vendor implements SMS 2FA, the attacker is forced to upgrade their phishing tools and methodology to support SMS 2FA if they want to compromise those accounts. This is a one-off cost that might require purchasing a new phishing toolkit.

A rational phisher must now calculate if adding support for SMS 2FA will increase their victim yield enough to justify making this investment.

If only 1% of accounts enable SMS 2FA, then we can reasonably assume supporting SMS-2FA will increase victim yield by 1%. Will the revenue from a 1% higher victim yield allow the phisher to recoup their investment costs? Today, the adoption is still too low to justify that cost, and this explains why SMS 2FA enabled accounts are phished less often, it makes more sense to absorb the loss until penetration is higher.
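To make that economics argument concrete, here's a back-of-the-envelope calculation. Every number is invented purely for illustration; the point is the shape of the trade-off, not the figures.

```shell
#!/bin/bash
# Hypothetical phisher economics. All numbers are made up.
victims=10000          # accounts compromised per campaign today
value_per_victim=1     # dollars extracted per compromised account
adoption=1             # percent of victims with SMS-2FA enabled
kit_upgrade_cost=10000 # one-off cost of an SMS-2FA capable phishing kit

# Extra revenue per campaign from also capturing the SMS-2FA accounts.
extra_revenue=$(( victims * adoption / 100 * value_per_victim ))
echo "extra revenue per campaign: \$${extra_revenue}"

# Campaigns required before the kit upgrade pays for itself (rounded up).
echo "campaigns to recoup: $(( (kit_upgrade_cost + extra_revenue - 1) / extra_revenue ))"
```

With these stand-in numbers the upgrade takes a hundred campaigns to pay off; as adoption rises, that number falls, and at some point upgrading becomes rational.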

For targeted (as opposed to opportunistic) phishing, it often does make economic sense to support SMS-2FA today, and we do see phishers implement support for SMS-2FA in their tools and processes.

Even if SMS 2FA is flawed, isn’t that still “raising the bar”?

It is true that, if universally adopted, SMS 2FA would force attackers to make a one-time investment to update their tools and process.

Everyone likes the idea of irritating phishers, they’re criminals who defraud and cheat innocent people. Regardless, we have to weigh the costs of creating that annoyance.

We have a finite pool of good will with which we can advocate for the implementation of new security technologies. If we spend all that good will on irritating attackers, then by the time we’re ready to actually implement a solution, developers are not going to be interested.

This is the basis for my argument that SMS-2FA is not only worthless, but harmful. We’re wasting what little good will we have left.

Are there better solutions than SMS 2FA?

Proponents are quick to respond that something must be done. 

Here’s the good news, we already have excellent solutions that actually work, are cheaper, simpler and more accessible.

If you’re a security conscious user...

You don’t need SMS-2FA.

You can use unique passwords; this makes you immune to credential stuffing and reduces the impact of phishing. If you use the password manager built into modern browsers, it can effectively eliminate phishing as well.

If you use a third party password manager, you might not realize that modern browsers have password management built in with a beautiful UX. Frankly, it’s harder to not use it.

Even if you can’t use a password manager, it is totally acceptable to record your passwords in a paper notebook, spreadsheet, rolodex, or any other method you have available to record data. These are cheap, universally available and accessible.

This is great news: you can take matters into your own hands, with no help from anyone else you can protect yourself and your loved ones from credential stuffing.

Q. What if I install malware, can’t the malware steal my password database?

Yes, but SMS-2FA (and even U2F) also don’t protect against malware. For that, the best solution we have is Application Whitelisting. Therefore, this is not a good reason to use SMS-2FA.

If you’re a security conscious vendor...

You don’t need SMS-2FA.

You can eliminate credential stuffing attacks entirely with a cheap and effective solution.

You are currently allowing your users to choose their own password, and many of them are using the same password they use on other services. There is no other possible way your users are vulnerable to credential stuffing.

Instead, why not simply randomly generate a good password for them, and instruct them to write it down or save it in their web browser? If they lose it, they can use your existing password reset procedure.
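A minimal sketch of what "randomly generate a good password for them" might look like server-side. The length and alphabet here are arbitrary choices, not a recommendation:

```shell
#!/bin/bash
# Sketch: assign the user a random password instead of letting them choose.
# 16 alphanumeric characters is an arbitrary illustrative policy.
genpass() {
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16
    echo
}

pass=$(genpass)
echo "assigned password: ${pass}"
```

Because the password is random and unique per service, it can never collide with a password leaked from some other site, which is the whole of the credential stuffing problem.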

This perfectly eliminates credential stuffing, but won’t eliminate phishing (but neither will SMS-2FA).

If you also want to eliminate phishing, you have two excellent options. You can either educate your users on how to use a password manager, or deploy U2F, FIDO2, WebAuthn, etc. This can be done with hardware tokens or a smartphone.

If neither of those two options appeal to you, that doesn’t mean you should deploy SMS-2FA, because SMS-2FA doesn't work.

Minor arguments in favor of SMS-2FA

  • SMS-2FA makes the login process slower, and that gives users more time to think about security.

[Note: I’m not making this up, proponents really make this argument, e.g. here, here and here]

This idea is patently absurd. However, if you genuinely believe it, you don’t need SMS-2FA. A simple protocol that will make login slower is to split the login process: first request the username, and then the password.

When you receive the username, mint a signed and timestamped token and add it to a hidden form field. You can then pause before allowing the token to be submitted and requesting another token that must accompany the password.

This is far simpler than integrating SMS, as you can just modify the logic you are already using to protect against XSRF. If you are not already protecting against XSRF, my advice would be to fix that problem before implementing any dubious “slower is better” theories.
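One way the signed, timestamped token described above might look, sketched with openssl. The HMAC key and the token format are placeholders; a real implementation would use a server-side secret and also check the timestamp against the current time:

```shell
#!/bin/bash
# Sketch of a signed, timestamped login token. "secret" is a stand-in for
# a real server-side key.
key=secret

mint_token() {
    local user=$1 ts=$(date +%s)
    local mac=$(printf '%s.%s' "$user" "$ts" | openssl dgst -sha256 -hmac "$key" -r | cut -d' ' -f1)
    printf '%s.%s.%s\n' "$user" "$ts" "$mac"
}

verify_token() {
    local token=$1 user ts mac
    IFS=. read -r user ts mac <<< "$token"
    local expect=$(printf '%s.%s' "$user" "$ts" | openssl dgst -sha256 -hmac "$key" -r | cut -d' ' -f1)
    [ "$mac" = "$expect" ]
}

token=$(mint_token alice)
verify_token "$token" && echo "token ok"
```

The server mints a token when it receives the username, puts it in a hidden form field, and refuses the password form until the token is old enough; tampered tokens fail the HMAC check.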

  • Attackers vary in ability, and some will not be able to upgrade their scripts.

If you can purchase and install one kit, it is pretty reasonable to assume that you are capable of purchasing and installing another. The primary barrier here is the cost of upgrading, not hacking ability.

When adoption is high enough that it’s possible to recoup those costs, phishers will certainly upgrade.

  • Don’t let the perfect be the enemy of the good.
  • Seat belts aren’t perfect either, do you argue we shouldn’t wear them?
  • Etc, etc.

This argument only works if what you’re defending is good. As I’ve already explained, SMS-2FA is not good.

Unique Passwords and U2F are not perfect, but they are good. Unique Passwords reduce the impact of phishing, but can’t eliminate it. U2F doesn’t prevent malware, but does prevent phishing.

  • A phishing kit that implements SMS-2FA support is more complex than one that doesn’t.

That’s true, but this complexity can be hidden from the phisher. I don’t know anything about audio processing, but I can still play MP3s. I simply purchased the software and hardware from someone who does understand those topics.

  • What about "SIM swapping" attacks?

SIM swapping attacks are a legitimate concern, but if that was the only problem with SMS-2FA, my opinion is that would not be enough to dismiss it.

  • It's not accurate to say "SMS-2FA doesn't prevent credential stuffing", because moving an attacker to other services is prevention.

I have an analogy I like to use when SMS proponents make this claim. If you cover a 1cm² area of your chest with sunscreen, does it prevent sunburn? I think a reasonable person would say that it does not, and that you will get sunburned.

Is this "preventing" sunburn?

If you enable SMS-2FA, then you are still compromised, and the problem has not been prevented. Therefore, I think a reasonable neutral person would agree that SMS-2FA does not prevent credential stuffing.

Tuesday, July 28, 2020

You don’t need reproducible builds.

I’m skeptical about build reproducibility, but ardent supporters are defending and cheering for it at every opportunity. After a few too many heated discussions, I’ve decided to write down my thoughts on the topic.

I’ll try my best to summarize the arguments for reproducible builds, and explain why I find them unconvincing.

Supporters like to pretend the topic is simple, as one reproducibility fan brusquely put it to me on Twitter:

“Reproducibility is important. Source code A leading to binary B through a reproducible build guarantees what you see (source) is what you get (the binary from the vendor). What is not clear here?”

What isn’t clear is what benefit the reproducibility provides. The only way to verify that the untrusted binary is bit-for-bit identical to the binary that would be produced by building the source code, is to produce your own trusted binary first and then compare it. At that point you already have a trusted binary you can use, so what value did reproducible builds provide?

This diagram demonstrates how to get a trusted binary without reproducible builds.

The answer to this question is that reproducible builds are not intended for end users; users are expected to nominate somebody they trust to build the source for them, and verify that the output is correct.

The revised workflow is similar to the first diagram, but now a trusted vendor takes care of the compilation step, so the problem is the same: the trusted vendor has to produce the binary anyway, so why not just use that one, making reproducible builds unnecessary?

The answer is that we can design a system where several third parties reproduce the binary, and we can require them all to agree that a binary matches. Here is a diagram of that workflow.

The problem with this scenario is that the user still has to trust the vendor to do the verification. If the trusted vendor is compromised, then they can provide tampered binaries. If they’re not compromised, then there was no benefit to reproducing it with third parties.

In effect, this is no different to how the system works today with Linux distributions.

The answer to this problem is that we can build a system where the user only has to trust the vendor once. If the vendor is compromised after that point, the reproduced builds will prevent them from distributing tampered packages to the user.

This is a little more complicated: the user can't verify that the builds reproduce by compiling them themselves, because then they would already have a trusted build. The answer is for the user to nominate the vendors they trust, and then require a signature from them before installing any packages.

Here is that workflow:

Now if the vendor is compromised or becomes malicious, they can’t give the user any compromised binaries without also providing the source code. This ignores some complexities, like ensuring security updates are delivered even if one vendor is compromised, what to do if the reproducers stop working, or how to reach consensus if the reproducers and your vendor disagree on what software or fork you should be using.
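The agreement step in that workflow can be sketched as follows. Everything here is a stand-in: the "package" is a dummy file, and both reproducer reports are faked locally, where in reality each nominated reproducer would build and report independently:

```shell
#!/bin/bash
# Sketch: accept a package only if every nominated reproducer reports the
# same checksum. File names and the package itself are hypothetical.
set -e
workdir=$(mktemp -d)
cd "$workdir"

echo "pretend this is a compiled package" > package.bin

# In reality each reproducer builds independently; here we fake two reports.
sha256sum package.bin | cut -d' ' -f1 > builder1.sum
sha256sum package.bin | cut -d' ' -f1 > builder2.sum

expected=$(cat builder1.sum)
for report in builder*.sum; do
    if [ "$(cat "$report")" != "$expected" ]; then
        echo "checksum mismatch in $report; refusing to install"
        exit 1
    fi
done
echo "all reproducers agree; package accepted"
```

A real system also needs signatures over those reports, key distribution, and answers to the availability questions above.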

Regardless, even if we ignore these practicalities, the problem with this solution is that the vendor that was only trusted once still provides the source code for the system you’re using. They can still provide malicious source code to the builders for them to build and sign.

I don’t know what supporters suggest is the solution to this problem, perhaps that the vendor you trusted shouldn’t provide any patches, configuration or any of the system software. If operating system vendors can’t actually modify or configure the operating system, then frankly this doesn’t seem like a useful system.

Perhaps some people are convinced this system is still worthwhile and achievable, but it is clearly not a simple solution. For this reason, I think it is entirely reasonable to be skeptical about the benefits of reproducible builds, and the benefits are not as clear as supporters claim.

  • Q. It’s easier to audit source code than binaries, and this will make it harder for vendors to hide malicious code.

I don’t think this is true, because of “bugdoors”. A bugdoor is simply an intentional security vulnerability that the vendor can "exploit" when they want backdoor access.

The benefit of bugdoors to attackers is that they’re perfectly plausibly deniable. If someone catches you, you can simply claim it was a mistake, and there are zero consequences. You can then repeat this ad infinitum; it’s simply not unusual to fix “mistakes” continuously, and there is no way to determine intent.

If someone wants to provide a malicious program, reproducible builds can force them to also provide the source code, but it can’t force the program to be non-malicious, so this is not particularly useful. You already have to trust the source code.

You might claim that I have no data to support this, but that’s the benefit of bugdoors to attackers: There can never be data to prove your wrongdoing.

  • Q. It’s easier to tamper with binaries than to write a bugdoor, so reproducible builds do improve security.

I absolutely disagree. Every programmer knows how to write a bug or short circuit some logic; hiding malicious activity in a binary, with a multi-billion dollar malware industry determined to find it, is more difficult. In addition, once you’ve produced and signed the malicious backdoor, it is not repudiable: you can’t deny you wrote and provided it.

With bugdoors, you don’t need to deny it - you just claim it was an error, and you’re automatically forgiven.

  • Q. Build servers get compromised, and that’s a fact. Reproducible builds mean proprietary vendors can quickly check if their infrastructure is producing tampered binaries.

I think this is true, but it ignores significant trade-offs. The vendor needs to create and maintain two disparate build infrastructures, and then give additional people privileged access to that new infrastructure. If you don't do this, there is no benefit to reproducible builds, because you'd just be building the same potentially compromised binary twice.

We know that attackers really do want to compromise build infrastructure, but more often they want to steal proprietary source code, which must pass through build servers.

This means that vendors will increase the likelihood of attacks that really are happening, to prevent an attack that could happen.

That is a significant trade off, and the decision to invest in reproducible builds isn’t as obvious as supporters claim.

  • Q. If a user has chosen to trust a platform where all binaries must be codesigned by the vendor, but doesn’t trust the vendor, then reproducible builds allow them to verify the vendor isn’t malicious.

I think this is a fantasy threat model. If the user does discover the vendor was malicious, what are they supposed to do?

The malicious vendor can simply refuse to provide them with signed security updates instead, so this threat model doesn’t work.

  • Q. Non-reproducible builds violate the GPL, because you can’t produce a bit-for-bit identical binary from the provided source code.

I think this argument is ridiculous, and would mean GPL binaries also can’t use code signing or TLS. Clearly the vendor cannot give you the private keys required to produce the code signatures or the CA roots, so by this argument they also violate the GPL.

  • Q. Whether it’s useful for end users or not, it will allow experts to monitor for compromised build servers producing tampered builds.

I think this is true, but there are other attacks against compromised build servers, all of which are more common than producing tampered builds.

More often, attackers want signing keys so they can sign their own binaries, steal proprietary source code, inject malicious code into source code tarballs, or malicious patches into source repositories.

Reproducible builds don’t help with any of those problems.

Q. A reproducible build is a good quality build. Whether there are security benefits or not, I just want people to do it.

Whether reproducible builds are better quality or not is a matter of opinion, and we shouldn’t be trying to force our opinions on others by claiming it’s for security.

I happen to disagree, and don’t think reproducibility makes a quality build, I think it adds unnecessary complexity.

Monday, July 18, 2016

Just when you thought we couldn't take this any further... our quest to build ctypes.sh, a toolkit for interacting with native code directly from bash scripts, has reached version 1.1. Apart from the standard bug fixes and improvements, the major enhancement in this release is automatic structure support.

Wait, what?
First, some background: ctypes.sh is similar to the python ctypes module, but for bash. If you’ve ever wanted to access native libraries in your shell scripts (libm, zlib, gtk+, etc), or use system facilities (poll, select, setitimer, sockets1, pthreads, etc) - and who doesn’t want that - ctypes.sh can make it happen. ctypes.sh isn’t a script, it’s a plugin -- bash allows you to load new features at runtime via enable -f. I know, who knew?

Here’s a fun demo, a port of the GTK+3 Hello World to bash! Notice that we even generate function pointers to bash functions that can be called from native code, so you can provide callbacks! ctypes.sh takes care of translating between bash and native code, and this works really well for simple data types (int, float, strings, etc). Things can get complicated when you need to use a struct * parameter.

Python solves it the same way ctypes.sh did: the user has to manually translate the structure into a usable form. In Python you create a class with matching members, and in bash you create an array.

That works, but it’s laborious and not much fun.

Starting from 1.1, most2 of the time we can automatically import structures and create a bash data structure that looks like the native equivalent.

Let’s look at an example, and then I’ll explain how we do it. Here's how you would call stat().


# Define the format of struct stat for bash
struct stat passwd

# Allocate some space for the stat buffer
sizeof -m statbuf stat

# call stat
dlcall stat "/etc/passwd" $statbuf

# Convert result into bash structure
unpack $statbuf passwd

printf "/etc/passwd\n"
printf "\tuid:  %s\n" ${passwd[st_uid]}
printf "\tgid:  %s\n" ${passwd[st_gid]}
printf "\tmode: %o\n" ${passwd[st_mode]##*:}
printf "\tsize: %s\n" ${passwd[st_size]}

(Error checking omitted for clarity, full version here)

All commands have builtin help (use help struct, for example), and there's a wiki with examples and documentation.

There’s enough information in the compiler debugging data for us to reconstruct the original types, so we parse it and translate it into a format that can be used in bash. It’s surprising how well this works!

$ source ctypes.sh
$ struct itimerval interval
$ echo ${interval[it_value.tv_sec]}


In the future, we expect to be able to automatically import enums, macros3 and parameter types as well. We’re using the fantastic libdwarves behind the scenes, which provides a convenient API for extracting and parsing DWARF data.

This does mean that dwarf data needs to be available, but this is simple on most platforms. There are more detailed troubleshooting steps available here, but in general:

  • On RedHat or CentOS, try debuginfo-install glibc
  • On Fedora, try dnf debuginfo-install glibc
  • On Debian or Ubuntu, try apt-get install libc6-dbg

An interesting problem we had to solve is that bash stores associative arrays as hash tables, and discards the ordering of elements. You can test this yourself: no matter how you assign elements, the order is forgotten.

$ declare -A hello
$ hello[foo]=1 hello[bar]=2 hello[baz]=3 hello[quz]=4
$ echo ${hello[@]}
2 4 3 1
There is no way of recovering or influencing the order of associative array elements4, so this can’t be used for storing structures which must maintain the order of members.

A quick reminder: a hash table has a fixed number of buckets, and each bucket is just a linked list. When you insert a new element, it gets appended to the list at table->bucket[hash(key) % nbuckets].
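That bucket-selection step can be sketched in bash itself with a toy hash function (this is not bash's real hash, just an illustration of hash(key) % nbuckets):

```shell
#!/bin/bash
# Toy illustration of bucket selection. With a single bucket, every key
# maps to bucket 0, so the one chain preserves insertion order.
toyhash() {
    local s=$1 h=0 i
    for ((i = 0; i < ${#s}; i++)); do
        # accumulate character codes with a simple multiplicative hash
        h=$(( (h * 31 + $(printf '%d' "'${s:i:1}")) % 1000003 ))
    done
    echo "$h"
}

nbuckets=1
for key in foo bar baz quz; do
    echo "$key -> bucket $(( $(toyhash "$key") % nbuckets ))"
done
```

With nbuckets greater than 1 the keys scatter across buckets and the walk order diverges from insertion order; with nbuckets=1 it cannot.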

Luckily for us, the bash plugin api allows plugins to set the bucket size used when creating an associative array. So, what happens if we make a test plugin that creates a new associative array with the bucket size set to 1?

   entry = make_new_array_variable("hello");
   entry->value = assoc_create(1); // bucket_size = 1
   entry->attributes = att_assoc;

$ enable -f onebucket
$ onebucket hello
$ hello[foo]=1 hello[bar]=2 hello[baz]=3 hello[quz]=4
$ echo ${hello[@]}
1 2 3 4
All elements get appended to the same linked list, and so the order of elements is maintained!

We use this trick to create associative arrays that remember the order elements were assigned, and can export them back to native structures correctly.


Because we can.


Right now!

The new features are documented on the wiki, the new release has been made, there are fresh examples in the test directory and the issue tracker is ready to receive any bugs you find.

And of course, we’re eagerly awaiting your mail asking if this is serious.

  1. Yes, bash has some basic builtin support for sockets, but it's hardly comprehensive.
  2. Some complicated structures might fail, we’re working on it.
  3. Macros are only included in debugging data if you use cc -g3 or similar. Nobody does this because it makes really big binaries, but we have some workarounds planned.
  4. Well, okay, I guess you could brute force a key prefix to influence the hashes or something.