My Takeaways from ENIGMA 2019

I have two favorite conferences of the year:

  1. AppSec Cali
  2. ENIGMA

ENIGMA 2019 ended today, and I wanted to do a quick capture on what I saw and found interesting there.

The conference

For me conferences are all about the combination of ideas, people, and conversation, and that’s what both of these do really well.

First a bit about the conference itself. It’s put on by USENIX, which means it’s an academic conference. That means most of the talks are by Ph.D. types who have a tentacle in the security industry as well. But the conference also features prominent speakers from industry. I’ve not looked at the actual stats, but I’m guessing 75% academics and 25% from industry. Regardless of the actual mix, it feels like a good one.

What I like most about ENIGMA is that it’s single-track, so there’s no FOMO whatsoever. There’s one giant room where all the talks are, so everyone is in session and at break at the same time. This massively improves networking potential, and I’ve had some of my best conversations at the two ENIGMA conferences I’ve been to so far (out of four total).

Talks breakdown

ENIGMA exposes the weaknesses of other conferences by doing certain things really well.

You can basically see the world as a series of spectrums with extremes at both ends. Or at least that’s how I see the world. And conferences have this as well.

Academic conferences are mostly theory, the research takes forever, it’s extremely robust and defensible, and the conclusions are quite modest and muted. That’s my understanding, anyway.

Conferences like DEFCON are the opposite, with wild research, talks that often focus on growing a brand rather than the content itself, and the methods used are usually quite crude compared to academic standards. But in my opinion, the hacker community gets more work actually done through quick trial and error.

Also, both sides secretly want the respect of the other, even though they pretend it would be beneath them.

It shows me that what’s needed is a move towards the middle in most cases. Hacker types need more rigor in what they do. And academic types need to move faster and be more willing to fail. Both sides can learn from each other.

My personal preference these days is for conference talks that don’t just show me how X widget and Y system have vulnerabilities. It’s too easy to find problems in things compared to finding solutions for them; we already know everything is broken. I still like those talks, and find them interesting, but only for a brief moment—like a game of chess that I can’t (and shouldn’t) remember the next day.

What I really prefer is hearing big ideas about how things are broken and how we can fix them. Causes rather than symptoms. Or about software and solutions that address those big ideas. The perfect conference environment for me would be:

  1. A TED-like presentation of a problem or an idea
  2. A research project or experiment around that idea
  3. A reveal of the results
  4. A brief discussion of what they learned, with next steps

The research part could be substituted with code they wrote (and made available) to go collect data and/or do a particular task defined in the problem statement.
Length? 15 minutes. And I want to see 40 of these talks in a conference.

That is how you surface the best ideas, expose new and diverse thinkers to the world, and get good ideas seen by those who can help apply them at scale.

So it’s a combination of slick presentation with technical content, wrapped into a cohesive narrative. ENIGMA is the closest thing I’ve seen to this format, which is why I love it so much.


This year’s offering was fantastic. Here’s what I enjoyed the most:

  • Great conversations with @anthonyvance and @oliikit, @alsmola, @act1vand0, and a bunch of other people who don’t do the Twitter thing and/or like to stay in the shadows.
  • Ran into Bob Lord after his great talk.
  • Met Neha Rungta and Ashkan Soltani after their talks, which I really enjoyed.
  • Got to see a bunch of local friends that I sadly only see at cons.

Favorite talks

So here were my favorite talks.

  1. Abusability Testing, by Ashkan Soltani
  2. Provable Security at AWS, by Neha Rungta
  3. Usage of Behavioral Biometric Technologies to Defend Against Bots and Account Takeover Attacks, by Ajit Gaddam
  4. How to Predict Which Vulnerabilities Will Be Exploited, by Tudor Dumitras
  5. Mobile App Privacy Analysis at Scale, by Serge Egelman
  6. Building a Secure Data Market on Blockchain, by Noah Johnson
  7. Insider Attack Resistance in the Android Ecosystem, by René Mayrhofer
  8. Convincing the Loser, by Ben Adida

Interesting tidbits extracted from various talks

  • We shouldn’t discard knowledge because it didn’t come from academia. Mendel had his theories rejected because he wasn’t credentialed in biology, and his work was almost lost.
  • The demo in Ajit Gaddam’s authentication talk was really excellent. The whole time I was listening I was thinking about my post on Continuous Authentication from 2015.
  • You can use TrickURI for checking how your code handles various URIs.
  • Stethoscope is a tool that Netflix uses to check a client’s configuration before it can access certain things.
  • A study on who found the most vulnerabilities showed that the personality trait of openness was more predictive of success than additional training or better cognitive performance. This sounds a lot like other advice I’ve heard: hire for high IQ and train from there. Especially for security people, since it’s all about curiosity and discovery. What this study did was narrow that down to a particular OCEAN trait—Openness to Experience.


If you’ve not been to ENIGMA, and you like big ideas more than the party and entertainment culture of like 3/4 of the conferences these days, you need to add this to your list for 2020. It’s January 28th-30th at the San Francisco Hyatt.

Subscribe for one coffee a month ($5) and get the Unsupervised Learning podcast and newsletter every week instead of just twice a month.


Unsupervised Learning: No. 162 (Member Edition)


Machine Learning’s Effect On Humanity Will Be Magnifying Our Successes and Failures

AI—and specifically machine learning—is going to empower humans the way a futuristic exoskeleton would empower a 4-year-old.

When they want something from the kitchen cabinet—they’re going to get it. And if they don’t want to brush their teeth, it’s not going to happen.

And in the process of acting on these desires, the kitchen, bathroom, and perhaps the entire house will be destroyed.

That’s humans with machine learning. It’s a force multiplier. An exoskeleton. The granting of super-powers.

I am not a math Ph.D. or an AI expert, but after reading around 15 books on all the different types of artificial intelligence—past, present, and future—I think machine learning is going to greatly magnify our successes and failures as humans. It’s going to make everything we do more extreme—for good, and for evil.

Humans getting access to machine learning is going to produce realities like Black Mirror and Star Trek: The Next Generation, with very little in between.

The problem is that we as humans are experimenters. We try things. We try economic systems. We try safety programs. We try social incentive programs.

But we’re often sloppy and wrong, so even if something was meant to cause harm, it seldom works so well that it can inflict maximum damage before someone notices.

With AI/ML, companies and governments will be able to launch half-baked ideas—just as they always have—that work extraordinarily well. Too well.

These results will still require skill and intent to interpret and misuse, but we should assume both of those will be in abundant supply.

  • We’ll launch marketing campaigns designed to gather information on people and determine their preferences, and algorithms will come back with answers for how to manipulate them politically.

  • We’ll ask who is most likely to commit crimes, and algorithms will come back with lists of our least fortunate.

  • We’ll ask how to improve designs we’ve had for hundreds of years, and the algorithms will surpass human ingenuity in minutes.

What this means is that every mistake we make will be magnified, accelerated, and perfected—automatically. And the potential for this to produce dystopian power structures cannot be overstated.

I wish I were saying this so that people would read it, learn, and be more cautious.

But they won’t.

Many books have been written on the topic of AI, and many of them call for caution in building this potentially civilization-ending technology.

But there’s no one to listen. We are not a people. We are not a government. We are not a world government with a unified people.

We are a collection of market-driven companies trying to win, and that means we will act independently—in our own interest—to beat out our competitors.

That’s how Black Mirror gets made in the United States, without an overlord government like in China. In China they make it on purpose, and in the U.S. it gets made because it’s effective at accomplishing things and therefore makes people money.

Either way you end up with Black Mirror.

But that raises the question: how can we get ST:TNG instead?

I think the only option is to win a series of very precarious races. In short, we have to get lucky.

We basically need to continue to grow in intelligence, blend with technology through implants, create some semblance of AGI, and have a series of really bad failures—but not so bad that they destroy us.

So, dystopian societies where everyone kills themselves. Or where they create a bot army to try to take over the world. Etc.

We need a series of serious but small mistakes, in other words, to show us the destructive potential of missteps while holding vorpal scissors.

If we can make enough of those to learn from, but not so many or so large that we get destroyed, we might be able to improve ourselves to the point where we’re responsible enough to wield the power of machine learning, reinforcement learning, and evolutionary algorithms without erasing ourselves in the process.

So it’s a race between the power of the ML we can create, vs. our own maturity.


DNS Servers That Offer Privacy and Filtering


If you’re a programmer, a systems administrator, or really any type of IT worker, you probably have your favorite go-to IP addresses for troubleshooting. And if you’re like me, you’ve probably been using the same ones for years.

Such IPs can be used for:

  • Testing ping connectivity
  • Checking DNS resolution using dig or nslookup
  • Updating a system’s permanent DNS settings

Most DNS servers allow you to ping them.

I like using DNS servers for this because you can use them for both connectivity and name-resolution testing, and for the longest time I used the Google DNS servers:

8.8.8.8
8.8.4.4

…but they don’t have any filtering enabled, and in recent years I’ve become less thrilled about sending Google all my DNS queries.

Alternatives to Google DNS

At some point I switched to using Cisco’s Umbrella servers (Cisco bought OpenDNS, which is where Umbrella came from) because they do URL filtering for you. They maintain a list of dangerous URLs and block them automatically, which can help protect against malware.

The OpenDNS servers are great, but I always have to look them up. Then, a few years ago, a new set of DNS servers came out that focused not only on speed and functionality, but also memorability.

One of the first easy-to-remember options with filtering was IBM’s Quad9—which, as you might expect, has an IP address of four nines:

9.9.9.9

I tried to use Quad9 for a bit when it first came out, but found it a bit slow. I figured they were being overwhelmed at launch time, or their filtering wasn’t tweaked yet. I imagine they have probably fixed that by now, but more on performance below.

Enter CloudFlare

So with Google, Cisco, and IBM providing interesting options with various functionality, we then saw CloudFlare enter the arena.

But rather than provide filtering, they instead focused on privacy.

Some other recursive DNS services may claim that their services are secure because they support DNSSEC. While this is a good security practice, users of these services are ironically not protected from the DNS companies themselves. Many of these companies collect data from their DNS customers to use for commercial purposes. Alternatively, 1.1.1.1 does not mine any user data. Logs are kept for 24 hours for debugging purposes, then they are purged.

CloudFlare Website

And perhaps coolest of all for me was their memorability rating, which is basically flawless: 1.1.1.1 abbreviates to 1.1, so you can literally test by typing ping 1.1.

How cool is that?
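As a side note on why that shorthand works: classic inet_aton address parsing treats the last component of a two-part address as a 24-bit value, so 1.1 actually expands to 1.0.0.1 (Cloudflare’s secondary resolver address) rather than 1.1.1.1—either way you land on their service. A quick sketch of the parsing behavior in Python:

```python
import socket

# Classic inet_aton parsing: in a two-part address "a.b", the "b"
# component fills the remaining three bytes, so "1.1" becomes 1.0.0.1.
packed = socket.inet_aton("1.1")
print(socket.inet_ntoa(packed))  # 1.0.0.1
```

This is the same expansion most ping implementations apply, which is why `ping 1.1` just works.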

So they’re not filtering your URLs, but they are consciously avoiding logging or tracking you in any way, which is excellent.

Norton ConnectSafe DNS

Norton also has a public DNS service, which has an interesting feature of multiple levels of URL content filtering:

  • Block malicious and fraudulent sites
  • Block sexual content
  • Block mature content of many types

My recommendation

Performance also matters here, and that will vary based on where you are, but in recent testing I found all of these options to be fairly responsive.

To me it comes down to this:

  • If you care about privacy, speed, and maximum memorability, I recommend CloudFlare: 1.1.1.1

  • If you want URL filtering, I recommend Quad9 over Umbrella, simply because it’s easier to remember and seems to focus on having multiple threat intelligence sources. I find the filtering claims by both companies to be too opaque for my tastes, with both of them feeling like borderline marketing, to be honest.

  • And if you want multiple levels of URL filtering, you can go with the Norton offering, but I think I personally prefer to just use Quad9 for that and be done with it. Norton is still a cool option for protecting something like an entire school by forcing its DNS through the strictest option.


Final answer—if pressed—here are the two I recommend you remember:

  1. For speed and privacy: 1.1.1.1
  2. For filtering: 9.9.9.9
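If you want to make either of these permanent on a Linux machine, a minimal sketch of /etc/resolv.conf looks like the following—assuming your distribution isn’t managing that file through systemd-resolved or NetworkManager, in which case manual edits may get overwritten:

```
# /etc/resolv.conf
nameserver 1.1.1.1   # CloudFlare: speed and privacy
nameserver 9.9.9.9   # Quad9: threat-intel filtering
```

Listing both gives you a fallback resolver, though most systems only consult the second entry when the first times out.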


We’re Heading for Something Ugly in 2020

Art By Aaron Blanco Tejedor

I’ve had a sense of foreboding for the last month or so about the mental health of the United States. And sure—the Trump situation is raising pulses, but it’s more than that.

It’s a building tension that I feel coming from everyone—especially online—from being forced by current events into extreme versions of ourselves. It’s like the country is a giant piece of strong but brittle steel being bent closer and closer to the snapping point. It’s creating a piercing sound that everyone can hear and can’t get out of their head.

In broad strokes, we are polarizing and the center is disappearing—both economically and ideologically. 50 years ago we would pick between something like a Romney and a McCain character for president, candidates who were basically the same because the country had shared goals.

Today, everyone is getting forced to pick between socialism and fascism, and if you strike a single wrong note in a conversation with someone, you’re assumed to be their opposite.

A loose visual characterization of the extremification of our discourse

Nuance and exploration in a political discussion today lasts around 40 seconds. Then as soon as someone hits one of the keywords—like “free speech”, “me too”, “the wall”, etc.—the conversation becomes a garbage fire. Each side labels the other, automatically figures out the 39 other beliefs that they probably hold, and then figures out how to exit the conversation and go tell their in-group that they encountered another enemy in the wild.

Labeling and ostracizing have always been present in politics, but in the last year they’ve become pathological.

One place I see it very prominently is in the information security community that I’m a part of, with Twitter as the courtroom of choice. As I write this, we’re in the middle of the worst community conflict I’ve seen going back to 1999. It’s not separate from the bending steel bar that is our country right now—it’s a characterization of that stress through a particular lens. In this case, a lens of a Cybersecurity industry with serious gender imbalance and male chauvinism problems.

As a result we have every national-scale debate taking place within our community as well, which feels magnified ten times because you see and know the people being affected. Some of the arguments I see on both sides include:

  • Women can’t get respect in the industry
  • It’s a boy’s club
  • It’s a bunch of immature GamerGate types playing with more expensive toys
  • You can’t get accepted to speak if you’re a woman
  • Men (both customers and talent) don’t respect female security skills
  • There’s too much focus on technical skills and not enough on other abilities
  • InfoSec/CyberSecurity needs more diversity
  • You’re either a vocal advocate of these things or you’re part of the problem
  • It’s all smiles in public, but it’s a nasty frat house behind the scenes
  • The female population’s interest in tech and security is lower than the percentage of women in IT/Security, so there’s no imbalance. It’s natural, in other words
  • Even if the imbalance is somewhat natural, there’s still far too much discrimination, so we’re not yet at the natural numbers
  • Women can get accepted to conferences just by being a woman, and their content doesn’t even have to be as good as for men
  • There’s a #metoo shaming mafia that’s out of control
  • The #metoo movement was overdue for InfoSec, and it’s good that it happened
  • People are afraid to speak their minds for fear of being labeled something, and we should blame women and “SJW’s” for that
  • The community is going underground again because it’s too risky to have honest/interesting conversations in public in this political climate

Of course, if someone is wearing a swastika, pick a side.

What I find most interesting about these types of argument lists is that people think they need to pick a side. Stop trying to pick a fucking side. Reality doesn’t have sides. Reality is multi-dimensional. Almost everything being argued by anyone is probably true to some degree. The question is how much, and how that ranks against the other simultaneous truths.

What if we just accepted the entire list—even the crazy ones—and said, “ok, that’s probably true to some degree,” and then moved on to an empathic exploration of the shades of grey? Wouldn’t that be something?

Imagine that world. Where we could start by saying I agree that all your points have some merit to them, and you accept that all mine have some degree of merit, and then we had a nuanced discussion from there.

Did you know that Utopia doesn’t just mean an ideal place? It also means an impossible place.


The InfoSec community is eating itself right now, and so is the entire country. The list of arguments at the national scale is even more interesting:

  • #metoo was needed to correct an imbalance
  • #metoo went too far and is causing damage right now
  • #metoo has more work to do; it’s not fixed yet

Let me just pause right there. Which of those is correct?

buzzer sound

Trick question—they’re all true. Let’s keep going:

  • White people are in more positions of power and therefore have more responsibility and should be willing to absorb more discomfort during this adjustment process
  • Free speech is the closest thing to a secular commandment that we have in this country, and we should protect it at all costs
  • Free speech includes letting people you don’t agree with speak in public, and in a functioning democracy you don’t get to demand someone not be allowed to speak because something they say might offend you
  • White people are acting like snowflakes when they’re called out for being politically incorrect
  • Disagreement is not aggression
  • People should care how their words affect others
  • Disagreement is not assault
  • The alt-right is flirting with—and in many cases married to—outright fascism and racism, and many of those types are using this “free speech” baton to try to spread their poison to the masses
  • It’s time for a major pendulum swing towards socialism, to correct for our worship of the free market
  • We want to make sure that pendulum doesn’t go too far, because it’s clearly been demonstrated that socialism doesn’t work
  • We need universal healthcare, and it’s ok to tax people with tens of millions of dollars in the bank to help make it happen
  • Poor and older white people are worried their country is being replaced by non-whites, and they’re supporting fascist politicians either knowingly or unknowingly in order to oppose that motion

More vitriol from Twitter

  • There’s a strong anti-white sentiment in the country that’s not healthy for anyone—including minorities
  • There’s a strong anti-white sentiment in the country that’s needed—at least in moderation for a period of time—to lift minorities and rebalance the future global equation in terms of internal pride and representation
  • Masculinity is toxic
  • Masculinity is natural, and 95% of the world thinks the liberal parts of the US and Europe are losing their minds about the issue
  • 95% of the world is simply lagging behind the progress on gender being made by the US and Europe
  • The genders are naturally different, and that’s ok, we can work with the differences and keep everyone happy
  • The genders are identical, and any indication of difference is evidence of culture poisoning and programming that harms outcomes
  • The poor should experience some suffering so they’re encouraged to break out of it
  • The poor need our help and it’s our responsibility to help them
  • The poor need opportunity but not a guarantee
  • Libertarianism is shirking away from the role of luck and circumstance, and therefore the responsibility to help others less fortunate
  • This all started when we lost Jesus in the classroom
  • When we lost our central agreement on morality, that’s where it all went astray

Again, unless someone is saying something overtly wrong and hateful—the ideas of race-based nationalism, for example, or that equal rights for women was a bad idea—we don’t have to pick a side, and we shouldn’t.

The thinking person’s responsibility is to imagine the world not as ones and zeros, but as slider bars of nuance, subtlety, and complexity. If you were to respond to the list above in terms of what you agree with, you should end up with a heat map, not a series of x’s and check marks. But that’s a utopia—at least for now. I think we’re heading for a dark place in 2020.

I try to avoid hyperbole these days, but I honestly don’t think we should strike “Civil War” from this discussion’s vocabulary. Not like before of course, but a modern version.

I see a world where people believe some collection of the arguments above, and they’re triggered by the others they don’t believe. It’s symmetrical stupidity—belief in one, and trigger in its opposite. And this has permeated into the very claims about reality itself.

So when Mueller releases his report saying Trump is compromised by Russia and colluded with them to help get elected, and that he’s been taking actions on Russia’s behalf to cause harm to the United States—millions of white people will literally see this as Hillary Clinton and George Soros committing some conspiracy against the rightful ruler of their country.

The reason this is so ugly is because it’s ultimately about resistance to change regarding the biggest issues in society—work, race, and gender.

These are the trends that are driving the overall tension.

  • White people are becoming the minority in the U.S. Link
  • Women are becoming more successful, while poor men are suffering Link
  • Automation and AI are taking more and more jobs, mostly from poor white men Link
  • The rich are separating from the poor Link

I think many racists and sexists are so without knowing it, and find other outlets to say it without saying it—like nationalistic slogans. Many aren’t dishonest, but rather confused.

It’s not just one thing. These all combine to make millions of white people extremely angry, in a way that I don’t think they can even understand or articulate. I think that’s how you explain them hating Obama, who had a perfect family and no major personal scandals whatsoever, while one of the most proudly amoral men in the world gets massive support from white Evangelicals.

Possible population crossover rates

The point in all of this is that I’m seriously worried it will come to violence.

Not a French Revolution, but many small to medium-sized protests similar to what happened in Charlottesville. And as a bonus, we have Putin doing his best with information warfare to make it happen.

There is so much change—happening so fast—that part of our population is lagging behind, feeling disrespected, and is willing to vote for anyone who says they can make the pain go away.


To summarize:

  1. Tensions are higher than ever, and the moderate voices have been replaced by extremists on both sides.
  2. There are multiple conflict points—from race, to gender, jobs, to economic status.
  3. This is clearly visible in the InfoSec community, as a microcosm of the country overall.
  4. All of the tension is being echoed and magnified by social media, with the added inertia of an active Russian information warfare campaign.
  5. Trump leaving office—by either resigning or being forcefully removed—could be the tipping point for actual violence.


A few notes:

  1. When Michael Moore said he saw so much small-town support for Trump, and that we needed to see his election as a real possibility, everyone laughed. Including me. I see this as the same sort of thing, although I have less anecdote to go on. But I do think the general tension seen on social media right now is a similar signal to what Moore was seeing in 2015.
  2. I don’t know much about the UK situation, but this feels 90% applicable to the Brexit situation as well. In both cases it’s about the size and pace of change for an aging majority that’s watching its influence diminish.
  3. January 27, 2019 — Changed the description of my graphical image to reflect that it’s a characterization of multiple studies and not pointing to a particular dataset. Someone said it was biased, but since it’s symmetrical I don’t see how that’s possible. I assumed it was common knowledge that multiple studies have shown our beliefs and discourse becoming more extreme.
  4. January 27, 2019 — Changed Charleston to Charlottesville.


Unsupervised Learning: No. 161

Unsupervised Learning is my weekly show where I spend 5-20 hours finding the most interesting stories in security, technology, and humans, which I then curate into a 30-minute podcast & companion newsletter.

The goal is to catch you up on current events, show you the best content from around the web, and hopefully give you something to think about as well.

Subscribe to the Newsletter or Podcast

Security News

The U.S. and other western countries are selling surveillance technology to authoritarian countries they know will use them for harm. This is very much like exporting weapons in the past—which became illegal for this exact reason. Fundamentally, surveillance technology and weapons are both control mechanisms, and that is how the spirit of the law should characterize them. Whether you’re selling missiles or facial recognition technology, the goal is the same—giving certain governments the ability to maintain control of their populations while not giving it to others. That’s a policy decision based on who you want to see win, with the added element of likely being willing to sell to most anyone if the money is exorbitant enough. My takeaway: we should be thinking of these surveillance and monitoring technologies as no different than other types of weapons, and should be very cautious about who we’re enabling to squelch their own people. Link

The president of the United States evidently wants to pull the United States out of NATO, which would be the epitome of Putin’s goals. And we have no way to know if Putin actually directed this action in person, since our president also demanded that records of their face-to-face conversations be destroyed. Like I’ve said before: we’re living in an actual spy novel.

Someone broke into an SEC database and made millions selling stocks based on insider information. This database had future filings, called “test filings,” that included upcoming mergers, acquisitions, and other key information that gave the attackers an advantage in trading. They made around $4 million with the information. Link

The Pentagon has released a major report plainly stating that climate change is a threat to the security of the United States. It details the various effects that come from climate change, and how they will affect our various bases and capabilities. Link

A California judge has ruled that authorities cannot force suspects to unlock their mobile devices using biometrics. The argument was that it would violate Fifth Amendment protection against self-incrimination. Sounds logical to me. Link

A province in China is launching a WeChat app that shows you people (including their personal information and national ID number) within 500 meters that owe money. Why? So you can shame them. It’s part of the overall Chinese social credit system that punishes bad behavior and rewards good behavior—with the definitions of good and bad being defined by the government, of course. It’s like China took Black Mirror and 1984 and used them as architecture documents. Link

Russia is evidently using LinkedIn as a tool for information gathering on U.S. people of interest. This isn’t surprising to me, as we’ve also seen China doing the same thing. It’s not so much that it’s LinkedIn as that it’s a place where important people maintain updated and detailed information about themselves. If you’re a potential target of any kind of corporate or other types of espionage, keep in mind that your LinkedIn profile can reveal a lot about you, and that it’s already being used by two of our main adversaries. Link

There is significant evidence that North Korea’s bio-weapons program is active and thriving, and many think their danger from biological weapons is greater than from nuclear. Link

Google has been fined $44 million for GDPR violations, with the French complaint claiming that Google didn’t sufficiently provide information on the data it collected as part of its ads program. Link

The Girl Scouts now have a Cybersecurity badge, which is a move designed to get more girls interested in STEM. Love it. Link

Advisories: BlueHost

Leaks: BlackRock Advisor Data, FBI Data (3TB)

⚙️ Technology News

Netflix is raising its prices across its three tiers: $8 to $9, $11 to $13, and $14 to $16. Link

Google is closing Hangouts in 2020. This solidifies a clear lesson I’ve picked up over the last several years: don’t adopt new Google products. They either have abysmal UIs (have you seen Gmail lately?), or are basically run like half-baked experiments doomed to be shut down within a couple of years (or both). Google Reader was amazing, but they killed that too. At this point they just seem to have an R&D team that throws out ideas. Then they go build it with the same ridiculous interface that led to the downfall of all the other products, do a big announcement, and then watch it die for a few years before they discontinue it. It’s remarkable how predictable it is. Link

Google is buying Fossil’s smartwatch technology, which feels to me like they’re launching the next version of Google+. Link

Google is rolling out Material Design on Google Docs, Sheets, Slides, and Sites. I give Google a lot of crap, but I’m glad to see them consolidating their interface, and I think Material is the best thing they’ve made so far. Link

CERN is looking to build a new collider that’s four times the size (and 10 times the power) of the LHC. Link

Netflix says they get beat in ratings by Fortnite more than by HBO. Link

DJI—the leading drone manufacturer out of Shenzhen, China—has fired dozens of employees for fraud and said that the damage they did will result in around $150 million in losses for the company. They were evidently inflating the cost of parts and taking the extra for themselves. Link

Human News

Americans are now more likely to die of an opioid overdose than in a car accident. Link

China is experiencing its slowest growth in 28 years, which is having an economic impact around the world. The good news is that while their exports are slowing, their consumption is increasing, which helps other countries providing services to their new upper classes. Link

There’s an interesting breathing technique that’s talked about in this article: breathing in for 5 seconds, and breathing out for 5 seconds—for 5 minutes. Link

60% of the world’s wild coffee species are very close to extinction. This is one crisis I’m not too worried about actually, because if it actually starts causing a drop in coffee availability the whole world will basically react overnight to fix the problem. Climate change? Meh—probably a hoax, they say. But threaten their daily coffee and we’ll go to space and build a new planet from scratch designed for nothing but growing coffee. Link

China has a massive number of empty homes (20%, or 55 million), and if people figure this out and realize it means prices are inflated, it could lead to a massive selloff that seriously damages the Chinese economy. Link

Never forget that 50 years ago the sugar industry paid scientists to blame fat, which has helped cause an epidemic in obesity today. If you want to find evil, look for quiet influencers with lots of money who whisper into the ears of people we trust (see lobbyists). Link

China has confirmed the birth of two gene-edited babies, and that another woman is pregnant with a third. The scientist who ran the study is in Chinese custody for violating regulations. I can’t help but think that if the kids die or make China look bad he’ll be punished, but if they start shooting lasers out of their eyes or learn Calculus before age 3 the guy will be a national hero. Link

“Between 1983 and 2016, the median Black family saw their wealth drop by more than half after inflation, compared to a 33% increase for the median White household. The median Black family today owns $3,600 — just 2% of the wealth of the median White family. The median Latino family owns $6,600 — just 4% of the median White family.” Link

Ideas, Trends, & Analysis

Those Bashing Smart Locks Have Forgotten How Easy It is to Pick Regular Ones — In this essay I do a basic threat model on smart locks using various target neighborhoods and attacker types. Link

My favorite simplified definition for Artificial Intelligence is: “Any technology that can do what previously could only be done by humans.” This neatly incorporates the whole range of what we think of as AI—from facial recognition, to cancer diagnosis, to an AGI like Her or Skynet. So it doesn’t matter how trivial or specialized it is—if it could previously only be done by Homo Sapiens (and not any other kind of tech), then it qualifies as AI for most practical purposes.

In a spot of good news, book sales are up, and physical books are doing really well. In the U.S., independent bookstores grew by 35% between 2009 and 2015. Link

Discovery

The best-selling fiction books of all time. Link

The FBI’s full file on MLK. Link

a16z’s Joel de la Garza’s Notes on Security in 2019. Link

This is an ICS Security Assessment Scorecard. Link

HyperScience is a startup that takes human-readable data and makes it machine-readable. This is how AI replaces jobs. It’s not through Skynet—it’s hitting a tipping point regarding hundreds of monotonous tasks that previously could only be done by humans. Link

A visualization of which rappers had the largest vocabularies. Link

How to detect hidden cameras and spy gear. Link

A Bash scripting cheatsheet. Link

PortPush — A Bash utility for pivoting inside a compromised network. Link

Domained — A multi-source domain enumeration tool with EyeWitness integration. Link

Notes

Major announcement this week: I am now going to be doing a podcast for every episode—both member versions and regular versions. I was previously only doing every other week, but now every member episode will have a podcast with it as well. The podcast will be embedded in the blog post for each member episode, since I have no way of doing authentication in a regular podcast feed. Here’s last week’s member episode as an example, complete with its own accompanying podcast. Now subscribers can get the content every week either by newsletter or via audio! Link

Follow me on Feedly Follow

Do me a favor and go rate the podcast for me on iTunes. Link

Currently Reading: This Will Make You Smarter
Up Next to Read: Industry of Anonymity, The Master Switch, The Daily Stoic

Recommendations

This is a portable, fold-out solar energy charging system that can be used to charge electronics via USB, or even a car battery. Link

A Security and Privacy checklist. Great for friends and family not in security or tech. Link


“None of us are getting out of here alive, so please stop treating yourself like an afterthought. Eat the delicious food. Walk in the sunshine. Jump in the ocean. Say the truth that you’re carrying in your heart like hidden treasure. Be silly. Be kind. Be weird. There’s no time for anything else.”

~ Anthony Hopkins

Subscribe for one coffee a month ($5) and get the Unsupervised Learning podcast and newsletter every week instead of just twice a month.

Source: DM
Unsupervised Learning: No. 161

Those Bashing Smart Locks Have Forgotten How Easy it is to Pick Regular Ones

There’s currently a major backlash in the InfoSec community against so-called “smart” locks.

And it’s not just from people who naturally overreact to change, or from people outside of InfoSec: there are plenty of smart people in our field—whom I respect greatly—who are making loud noises against this technology. So I want to make it absolutely clear that 1) there are smart arguments on both sides of this discussion, and 2) I’m still open to being wrong about this.

People picking locks at an Infosec event

My main argument isn’t that smart locks are great security, or that IoT security is fine, or that the new is always better than the old—or any of that garbage. I lead the OWASP IoT Security Project, which details precisely how insecure IoT systems can be, and I’ve personally been finding these kinds of flaws in real devices for over a decade.

IoT Security is still a garbage fire, and we’re just starting to get our collective arms around the heat and smell of the problem. So you’re not going to hear the “IoT Security is Great” argument from me. What you’re going to hear from me is an appeal to practicality and risk management by asking a very simple question:

What’s easier for your most likely attacker to bypass—a regular lock, or a smart lock?

I’m not a lockpicking expert, by the way, but that’s kind of the point. You don’t have to be to get past most locks.

Many people in InfoSec have been to security conferences where they’ve learned in a matter of minutes how to open most locks with basic lockpicks. And once you’ve practiced for any period of time you can open the most commonly used house locks in mere seconds. And that’s not even talking about bump keys, snap guns, or any of the other more advanced techniques that are now available.

I’m not saying smart locks have good security—I’m reminding people that regular locks have virtually none.

The problem I see in this discussion is that we’ve somehow—in the information security community—forgotten about the risk that we have already accepted. For hundreds of years we’ve protected our homes and businesses with lock technology that’s absolutely trivial to bypass for anyone who spends even the slightest effort.

Threat modeling brings reality into focus

This is why threat modeling is so important: it allows us to move away from the open-ended theoretical discussion and into the world of the scenarios you’re most likely to face. So let’s do a few scenarios and see where we end up.

I honestly ran through this threat model not knowing where it would take me, which is why these exercises are so valuable.

  1. You live in a middle-class neighborhood in Middle America, most of the people in the neighborhood are teachers, cops, DMV workers, and just regular folk. There’s very little violent crime, but there have been some home break-ins recently due to the opiate crisis.
  2. You live in a high-tech housing area in a big city full of the smartest tech workers in the world, e.g., one of the Facebook housing units in Menlo Park, or a similar place in Seattle or Austin.
  3. You live in a very established and nice neighborhood with an active neighborhood watch, where everyone knows everyone else, and all crime is very rare.
  4. You live in a high-property-crime area in a big city. It’s supposed to be getting better soon as people work on clean-up efforts, but cars and homes are getting broken into often, with people finding things stolen, drug paraphernalia left behind, etc.

I think these are decent approximations of the bulk of people’s living situations in the United States. So for each of these scenarios, let’s ask: who is the attacker, how are they getting into the house, and will they have more, less, or equal success against a regular lock versus a smart lock?

I’m also not a property crime expert, so feel free to correct me if you are.

Now let’s model some attackers:

  • Drug-addicted, looking for anything to sell or trade for opiates
  • Mid-tier professional thieves who go after mid-tier neighborhoods for TVs, computers, jewelry before moving on or getting caught
  • High-end professional thieves who go after jewelry, art, and the content of safes
  • Neighborhood kids looking for opportunities for fun or to find something to sell for quick cash
  • Disgruntled delivery people
  • Roving package thieves

Now let’s start combining these targets with their potential attackers in likely scenarios:

Addict as threat actor

  • Addicts vs. Middle America: Why not kick in the door or break a window?
  • Addicts vs. High Tech Area: Why not kick in the door or break a window?
  • Addicts vs. Super Nice Area: Why not kick in the door or break a window?
  • Addicts vs. High Crime Area: Why not kick in the door or break a window?

Mid-tier Professional as threat actor

  • Mid-tier vs. Middle America: Pick the lock, go through window, kick door
  • Mid-tier vs. High Tech Area: Pick the lock, go through window, kick door
  • Mid-tier vs. Super Nice Area: Pick the lock, go through window, kick door
  • Mid-tier vs. High Crime Area: Pick the lock, go through window, kick door

For the mid-tier and above, if everyone in a neighborhood had the same basic smart lock with a PIN pad set to a default or shared code, that could be a good way in.

High-end Professional as threat actor

  • High-end vs. Middle America: Mismatched target, unlikely to attack smart locks
  • High-end vs. High Tech Area: Possible smart lock attacks to steal computers or intellectual property
  • High-end vs. Super Nice Area: Possible smart lock attacks, but easier entry available
  • High-end vs. High Crime Area: Mismatched target, unlikely to attack smart locks

Neighborhood kids as threat actor

  • Neighborhood kids vs. Middle America: Possible smart lock attacks, if it’s trivial
  • Neighborhood kids vs. High Tech Area: Mismatched target, unlikely to attack smart locks
  • Neighborhood kids vs. Super Nice Area: Possible smart lock attacks, but risk is probably not worth it
  • Neighborhood kids vs. High Crime Area: Why not kick in the door or break a window?

Disgruntled delivery person as threat actor

  • Disgruntled delivery person vs. Middle America: Possible, but they already have tons of opportunity to break the law and steal merchandise every day, and they’re more likely to avoid chances to be caught (logs, video, etc.)
  • Disgruntled delivery person vs. High Tech Area: Possible if easy enough, since the payoff might be sufficient? But why not pick the lock?
  • Disgruntled delivery person vs. Super Nice Area: Possible if easy enough, since the payoff might be sufficient? But why not pick the lock?
  • Disgruntled delivery person vs. High Crime Area: Other options seem better than spending time on hacking the smart lock.

Package thieves as threat actor

  • Package thieves vs. Middle America: Seems like they’d prefer to pick a lock vs. mess with a smart lock location because smart lock people are likely to have a camera in or around the house.
  • Package thieves vs. High Tech Area: Seems like they’d prefer to pick a lock vs. mess with a smart lock location because smart lock people are likely to have a camera in or around the house.
  • Package thieves vs. Super Nice Area: Mismatched target, unlikely to attack smart locks (you don’t go to Beverly Hills to break into homes and steal their Amazon packages)
  • Package thieves vs. High Crime Area: Other options seem better than spending time on hacking the smart lock.

Whew. Ok. That was fun.
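Out of curiosity, the scenario matrix above can also be captured as a simple lookup table. This is just a hypothetical Python sketch of the same bullets (the attacker and neighborhood labels are mine), not anything more rigorous:

```python
# A hypothetical encoding of the scenario matrix above; the labels and
# assessments are paraphrases of the bullets, not new analysis.

SMASH = "kick in the door or break a window"
PICK = "pick the lock, go through window, kick door"
MISMATCH = "mismatched target, unlikely to attack smart locks"

NEIGHBORHOODS = ("middle-america", "high-tech", "super-nice", "high-crime")

# (attacker, neighborhood) -> most likely approach
MATRIX = {("addict", n): SMASH for n in NEIGHBORHOODS}
MATRIX.update({("mid-tier-pro", n): PICK for n in NEIGHBORHOODS})
MATRIX.update({
    ("high-end-pro", "middle-america"): MISMATCH,
    ("high-end-pro", "high-tech"): "possible smart lock attacks",
    ("high-end-pro", "super-nice"): "possible, but easier entry available",
    ("high-end-pro", "high-crime"): MISMATCH,
})

def assessment(attacker: str, neighborhood: str) -> str:
    """Look up the most likely entry approach for a scenario."""
    return MATRIX.get((attacker, neighborhood), "not modeled")
```

The point the table makes at a glance is the same one the bullets make: in almost every cell, the smart lock isn’t the path of least resistance.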

So we did see a few situations where attacking a smart lock could make sense, assuming the locks were common enough for an attack to be quick and easy—like in a housing complex for tech employees (in, say, 5 years). But in most cases I think this cursory analysis shows that either:

  1. It’s easier to kick in the door or go through a window (or pick the lock) than it is to even look at a smart lock, or…
  2. Attackers are likely to start associating smart locks with video capture, remote monitoring, and lots of other IoT-ish functionality, which will mean a higher risk of getting caught.


  1. There are smart people arguing against smart locks, and it’s a discussion worth having.
  2. IoT Security isn’t good, and so neither is smart lock security.
  3. We forget the risks we’re used to—and in this case the thing we’re forgetting is how easy it is to pick a lock or kick in a door.
  4. If you run through the scenarios you’re likely to face, you’ll see that very few of them make it easier to get into your home because you have a smart lock, since you either have a threat actor mismatch or the alternatives are simply better for the attacker.
  5. The fact that people with smart locks are likely to have cameras and other smart security technology is likely to become a significant deterrent soon—if it hasn’t already—which will result in more people just moving on to the next home instead of messing with your house.

I think it all combines into this:

If a smart lock gives you significant features and convenience—above what you get with a normal lock—then it’s likely to be more than worth the risk tradeoff.

This is because it won’t be about replacing good security (old lock) with good security (IoT lock), but rather about replacing no security with something equally bad that’s more convenient and that might be a deterrent as well.


  1. The front door is bad overall security for a home, and just changing your lock doesn’t solve that.
  2. To me, the best argument against smart locks is that with a common door you already have the weakness of the existing conventional lock, and if you are still able to access that regular lock mechanism when you install a smart lock then you’ve simply *added* attack surface rather than changed it. In other words: you’ve now combined the weakness of the legacy lock with the weakness of the smart lock, which can only make the overall result weaker.
  3. If you can think of other abuse cases that I haven’t included, please let me know. For example, how to combine the internet vector with the local/physical side.
  4. The other complication with this discussion is that there are many potential combinations of the basic control variables, e.g., the strength of the door structure vs. being kicked in, the ease of entering via windows compared to the door, and the type of crime in the area in question. It’s much different, for example, if we’re defending against young, smart rich kids who might break in and steal the Amazon packages in your foyer while your maid is upstairs, or if you’re defending against frazzled adults looking for meth money.
  5. I’d also say that there are many “smart” locks that are trivial to bypass because they’re just garbage. In those cases an attacker only needs that knowledge to take advantage, and there I’d side with them being *worse* for security. But I think those weaknesses are likely to be spread out and hard to exploit at scale.
  6. Another key point is that it’s possible to oppose smart lock implementation without even arguing that they’re easier to bypass. I have a friend/associate who is currently being told that her whole building is being “upgraded” to smart locks, which will be able to log everything and send those logs to some random startup, without her consent or any transparency. In short, there are privacy concerns in addition to the larger physical access question.
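Note 2 above can be made concrete with a little probability. If the door opens when either mechanism is bypassed, and we assume (hypothetically) that the two attacks are independent, the combined bypass chance is always at least as high as either one alone:

```python
def combined_bypass(p_legacy: float, p_smart: float) -> float:
    """Chance that at least one of two independent bypasses succeeds,
    i.e. 1 minus the chance that both fail."""
    return 1 - (1 - p_legacy) * (1 - p_smart)

# Illustrative numbers only: a legacy lock picked 30% of the time plus a
# smart mechanism with a 10% exploit chance.
p = combined_bypass(0.30, 0.10)
assert abs(p - 0.37) < 1e-9      # higher than either mechanism alone
assert p >= max(0.30, 0.10)      # added attack surface never helps the defender
```

Which is exactly the note’s point: keeping the keyway while adding a networked mechanism adds attack surface rather than replacing it.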


Summary: The Dichotomy of Leadership


My book summaries are designed as captures for what I’ve read, and aren’t necessarily great standalone resources for those who have not read the book.

Their purpose is to ensure that I capture what I learn from any given text, so as to avoid realizing years later that I have no idea what it was about or how I benefited from it.

My One-Sentence Summary

The best leaders manage the balance between the extremes of a few core attributes: confidence vs. humility, discipline vs. creativity, mentoring vs. firing, training hard vs. training smart, leading vs. following, and empowering vs. micromanaging.

Content Extraction

A few of the major leadership attribute spectrums

  • Empower others, but be willing to step in with micromanagement temporarily if things get out of hand
  • Don’t be so dominating and intimidating that your leaders can’t step up and lead themselves
  • Discipline is great, but too much leads to a lack of creativity
  • Too much creativity and not enough discipline leads to sloppiness and mistakes

All quotes here are from the book itself unless otherwise indicated.

“The three components of great training are realism, fundamentals, and repetition.”

  • Plan enough to provide comfort when things go wrong, but not so much that you limit creativity
  • Train in a very realistic way, but don’t overtrain in a way that removes their motivation or overwhelms them with information
  • Be confident and willing to push through your strategy with others, but also remain humble about the ability for the world to serve you lessons, and about your team’s ability to teach you things you don’t know
  • Ensure that when you’ve gone too far in one direction in these attributes, that when you correct you don’t overcorrect by going too far in the other direction


  1. The whole book is about constantly adjusting the balances between extremes across multiple leadership attributes.
  2. The key is to be aware of what those attributes are, what their extremes are, and what the downsides are of going too much in each direction.
  3. Great leaders, in other words, are those who can maintain optimal levels of each attribute according to what’s needed to best accomplish the mission in the short and long-term.

You can find my other book summaries here.


  1. There is a previous book by the same two guys, called Extreme Ownership, and while it was good, it did emphasize the extremes of each point that was made. This book corrects that by focusing everything on the balances that have to be constantly adjusted for the situation. This is basically the better version of the first book, but you can still benefit from the first one as well.


Unsupervised Learning: No. 160 (Member Edition)

This is a member-only (even-numbered) episode. Members get the newsletter every week, as well as access to all previous episodes, while free subscribers only get odd-numbered episodes every other week.

Become a member to get access immediately


Why I Think the NSA is Releasing a Free Reverse Engineering Tool This Year at RSA

The NSA is releasing a free reverse engineering tool this year at the RSA security conference in San Francisco.

Many in the security community—who have an understandable and healthy distrust of the NSA—are wondering if there could be a backdoor in the software, if they’re using it to spy on people, etc. The various theories are interesting reading.

I think the answer is much simpler—they’re using the release of the tool to inject some goodwill into the community in hopes of attracting new talent (and reducing the loss of talent they already have).

In short, it’s all about recruiting.

Between Snowden, the ShadowBrokers leaks, and the damage caused by EternalBlue and NotPetya, I’m guessing morale is at a dangerously low level and they need to do something to raise interest and motivation for working there.

Releasing an open-source tool to help people do reverse engineering, while simultaneously training people how to be good guys and gals, is a pretty smart move in my mind.

Someone mentioned on Twitter that the move reminded them of The Last Starfighter, where an alien space force used a video game to find top talent to help defend the world. I think that’s spot on.

The hero plays a video game that’s actually a trainer

The military has been doing this for years as sort of an open secret, and they spend tons of money making the military and government appear in a positive light in Hollywood movies.

Some might think that’s gross, but I think the worst part about it is the fact that so few people notice—or would even care if they knew. It’s the same kind of thing here with this release. It smells exactly like public relations. But is that really a bad thing?

I wish they’d just come out and say it. Own the fact that it’s a bit of PR, and recruiting, and camaraderie all in one.

Despite the failings of the NSA in recent years, I don’t know many Americans who think we don’t need them. And to do their job well they need talent. And for that talent to perform they need to believe that they’re on the good side.

Or they’re Mr. Burns waiting to pounce—who knows…

I see the overture as a good thing. It’s them eating a piece of humble pie, and cautiously reaching out to the community with a gift. I hope we accept it, and I hope it makes the tenuous bond between us stronger.

Because like it or not, we need each other.

The last thing we need—with Russia and China owning us with impunity—is to be fighting amongst ourselves.
