Thursday, 14 November


Icahn smell money! Corporate raider grabs $1.2bn of HP stock to push for Xerox merger [The Register]

Watch out, Carl's about

It was only a matter of time before Carl Icahn got involved in the developing story that is HP and Xerox's marriage. The IT industry's biggest, baddest corporate raider is using his $1.2bn stake in HP to push for nuptials.…


Magic Leap rattles money tin, assigns patents to a megabank, sues another ex-staffer... But fear not, all's fine [The Register]

Wait, wait, wait... there is good news: It has a Spotify app. What a winner

Analysis  Augmented reality hype-merchant Magic Leap has had to whip out its begging cap, sorry, its once-in-a-lifetime investment chest again for venture capitalists to top up with millions of dollars.…


George Lucas Has Apparently Changed the Famous Greedo Scene In 1977's Star Wars Again, For Disney+ [Slashdot]

Freshly Exhumed shares a report from The Guardian: George Lucas -- whose departure from all things Star Wars seems to have been greatly exaggerated -- appears to have yet again doctored the famous Greedo scene in 1977's Star Wars [prior to it being shown on the Disney+ streaming service]. The scene depicts the Mos Eisley cantina in which Harrison Ford's Han Solo is confronted by an alien bounty hunter and winds up shooting him dead in a brief flurry of blaster fire. It has been much discussed over the years, largely because Solo shot Greedo in cold blood in the original, "Han shot first" 1977 cut, while in later versions Lucas re-edited the footage to depict Greedo as the aggressor, with Han returning fire in self-defense. Many fans have speculated about what effect that subtle change had on Han's transformation in the original trilogy from cold-hearted hustler to hero of the resistance. Now Lucas has tinkered all over again, to further muddy the waters. As seen on new streaming service Disney+, the scene features Han and Greedo shooting at roughly the same moment -- to be fair, this is a change introduced several years back. But now, Greedo appears to utter the phrase "MacClunkey!" before succumbing to his wounds. Reports suggest Lucas made the changes some years ago, perhaps around the time he sold Lucasfilm to Disney for $4 billion, in 2012. Celebrities such as Stephen King and Patton Oswalt have speculated about what the re-edit means for the future of Star Wars, though nobody seems to have much of a clue.

Read more of this story at Slashdot.


Intel's Assembler Changes For JCC Erratum Are Not Hurting AMD [Phoronix]

When we wrote about the Intel Jump Conditional Code (JCC) Erratum and Intel's patches to the GNU Assembler to mitigate the performance hit of the CPU microcode update, some readers expressed concern that the changes might hurt AMD performance. That does not appear to be the case...
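
For context, the erratum affects jump instructions whose bytes cross a 32-byte boundary or whose last byte ends on one; the assembler mitigation pads or re-encodes code so branches avoid those positions. A rough sketch of the condition (a hand-written illustration, not Intel's or binutils' actual logic):

```typescript
// Sketch of the JCC erratum trigger: a jump is affected when its bytes
// cross a 32-byte boundary, or when its last byte sits at the end of one
// (i.e. the next instruction starts a new 32-byte block).
function jccAffected(startAddr: number, lengthBytes: number): boolean {
  const lastByte = startAddr + lengthBytes - 1;
  const crossesBoundary = (startAddr >> 5) !== (lastByte >> 5); // different 32B blocks
  const endsOnBoundary = (lastByte + 1) % 32 === 0;
  return crossesBoundary || endsOnBoundary;
}

// The mitigation is conceptually just padding: shift the branch forward
// (with NOPs or longer encodings) until the condition no longer holds.
function paddingNeeded(startAddr: number, lengthBytes: number): number {
  let pad = 0;
  while (jccAffected(startAddr + pad, lengthBytes)) pad++;
  return pad;
}
```

A 4-byte jump starting at address 30 spans addresses 30..33, crossing the boundary at 32, so two bytes of padding move it cleanly into the next block.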


20% of UK businesses would rather axe their contractors than deal with IR35 – survey [The Register]

But firms will be forced to take a more measured response in time, says consultancy

As many as 20 per cent of UK businesses are axing contractors completely in order to ensure they are fully tax compliant ahead of IR35 changes next year, according to a survey.…


Weird flex but OK... Motorola's comeback is a $1,500 Razr flip-phone with folding 6.2" screen [The Register]

Twelve hundred quid for a Snapdragon 710 Android 9 gizmo. Stick a fork in this decade, we're done

Video  At a pseudo-rave slash launch party in Los Angeles on Wednesday night, Motorola revealed the 2019 Razr, an update on a flip phone that wowed people 15 years ago. Today, perhaps... not so much.…


That chill in the air isn't just autumn, it's Cisco's cooling finances: CEO warns of slipping sales [The Register]

Switchzilla gives gloomy outlook, tells of dark financial days for the coming fiscal year

Cisco is the latest tech biz to warn of a looming slowdown in spending as the network giant on Wednesday gave worse-than-expected guidance for the coming financial quarter.…


Motorola Resurrects the Razr As a Foldable Android Smartphone [Slashdot]

After teasing it last month, Motorola has officially announced the successor to the Motorola Razr. The "razr," as it is called, "keeps the same general form factor but replaces the T9 keypad and small LCD with a 6.2-inch foldable plastic OLED panel and Android 9 Pie," reports The Verge. "It'll cost $1,499 when it arrives in January 2020." From the report: The new Razr is a fundamentally different take on the foldable phones that we've seen so far: instead of turning a modern-sized phone into a smaller tablet, it turns a conventional-sized smartphone into something much smaller and more pocketable. [...] The core of the phone is, of course, the display. It's a 6.2-inch 21:9 plastic OLED panel that folds in half along the horizontal axis. Unfolded, it's not dramatically bigger than any other modern phone, and the extra height is something that the Android interface and apps adapt to far better than a tablet-size screen. The screen does have a notch on top for a speaker and camera and a curved edge on the bottom, which takes a bit of getting used to, but after a minute or two, you barely notice it. There's also a second, 2.7-inch glass-covered OLED display on the outside that Motorola calls the Quick View display. It can show notifications, music controls, and even a selfie camera mode to take advantage of the better main camera. Motorola is also working with Google to let apps seamlessly transition from the front display to the main one. There are some concerns about durability for the folding display, especially after Samsung's Galaxy Fold issues. But Motorola says that it has "full confidence in the durability of the Flex View display," claiming that its research shows that "it will last for the average lifespan of a smartphone." There's a proprietary coating to make the panel "scuff resistant," and it also has an internal nano-coating for splash resistance. (Don't take it swimming, though.) 
Motorola says that the entire display is made with a single cut, with the edges entirely enclosed by the stainless steel frame to prevent debris from getting in. Aside from the mid-range specs, like the Snapdragon 710 processor and "lackluster" 16-megapixel camera, seasoned reviewers appear to really like the nostalgic look and feel of the device. Did you own a Razr phone from the mid-2000s? How do you think the new model compares?

Read more of this story at Slashdot.

Wednesday, 13 November


NASA boffins tackle Nazi alien in space – with the help of Native American tribal elders [The Register]

Clickbait? We've heard of IT

NASA has given the Kuiper belt object nicknamed Ultima Thule the official name Arrokoth, which means sky in the Native American Powhatan-Algonquian language.…


NVIDIA 435.27.06 Vulkan Linux Driver Has Useful Display Improvements [Phoronix]

Released on Wednesday was the NVIDIA 435.27.06 Linux driver as their newest beta build focused on offering better Vulkan driver support...


Mesa 19.2.4 Released As Emergency Update After 19.2.3 Broke All OpenGL Drivers [Phoronix]

Mesa 19.2.4 was released on Wednesday as an "emergency release" after a bug was discovered that made last week's Mesa 19.2.3 version buggy for all OpenGL drivers...


Hologram-Like Device Animates Objects Using Ultrasound Waves [Slashdot]

An anonymous reader quotes a report from The Guardian: Researchers in Southampton have built a device that displays 3D animated objects that can talk and interact with onlookers. A demonstration of the display showed a butterfly flapping its wings, a countdown spelled out by numbers hanging in the air, and a rotating, multicolored planet Earth. Beyond interactive digital signs and animations, scientists want to use it to visualize and even feel data. While the images are similar, the device is not the sort of holographic projector that allowed a shimmering Princess Leia to enlist Obi-Wan Kenobi's help in Star Wars. Instead, it uses a 3D field of ultrasound waves to levitate a polystyrene bead and whip it around at high speed to trace shapes in the air. The 2mm-wide bead moves so fast, at speeds approaching 20mph, that it traces out the shape of an object in less than one-tenth of a second. At such a speed, the brain doesn't see the moving bead, only the completed shape it creates. The colors are added by LEDs built into the display that shine light on the bead as it zips around. Because the images are created in 3D space, they can be viewed from any angle. And by careful control of the ultrasonic field, the scientists can make objects speak, or add sound effects and musical accompaniments to the animated images. Further manipulation of the sound field enables users to interact with the objects and even feel them in their hands. "The images are created between two horizontal plates that are studded with small ultrasonic transducers," reports The Guardian. "These create an inaudible 3D sound field that contains a tiny pocket of low pressure air that traps the polystyrene bead. Move the pocket around, by tweaking the output of the transducers, and the bead moves with it." In the journal Nature, researchers describe how they've improved the display to produce sounds and tactile responses to people reaching out to the image.
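
The quoted figures are easy to sanity-check: at roughly 20 mph, a bead that completes a shape within a tenth of a second has a path budget of just under a metre per "frame". A quick back-of-envelope using the article's numbers:

```typescript
// Back-of-envelope check of the figures quoted above: bead speed ~20 mph,
// shape traced in under 0.1 s (within the eye's persistence window).
const beadSpeedMph = 20;
const beadSpeedMps = (beadSpeedMph * 1609.344) / 3600; // ≈ 8.94 m/s
const traceWindowSeconds = 0.1;
const maxPathMetres = beadSpeedMps * traceWindowSeconds; // ≈ 0.89 m

console.log(maxPathMetres.toFixed(2)); // prints "0.89": the per-frame path budget
```

Just under a metre of path per refresh is plenty for the centimetre-scale butterflies and globes in the demonstration.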

Read more of this story at Slashdot.


GitHub Faces More Resignations In Light of ICE Contract [Slashdot]

TechCrunch reports that another employee, engineer Alice Goldfuss, has resigned from GitHub over the company's $200,000 contract with Immigration and Customs Enforcement (ICE). From the report: In a tweet, Goldfuss said GitHub has a number of problems to address and that "ICE is only the latest." Meanwhile, Vice reports at least five staffers quit today. These resignations come the same day as GitHub Universe, the company's big product conference. Ahead of the conference, Tech Workers Coalition protested the event, setting up a cage to represent where ICE detains children. Last month, GitHub staff engineer Sophie Haskins resigned, stating she was leaving because the company did not cancel its contract with ICE, The Los Angeles Times reported. Last month, GitHub employees penned an open letter urging the company to stop working with ICE. That came following GitHub's announcement of a $500,000 donation to nonprofit organizations in support of "immigrant communities targeted by the current administration." In that announcement, GitHub CEO Nat Friedman said ICE's purchase was made through one of GitHub's reseller partners and said the deal is not "financially material" for the company. Friedman also pointed out that ICE is responsible for more than immigration and detention facilities.

Read more of this story at Slashdot.


John Carmack Stepping Down As CTO of Oculus To Work On AI [Slashdot]

Oculus CTO John Carmack announced Wednesday that he is stepping down from the virtual reality company to focus his time on artificial general intelligence. The Verge reports: Carmack will remain in a "consulting CTO" position at Oculus, where he will "still have a voice" in the development work at the company, he wrote. Recent comments from Carmack suggest he may have soured on VR. Carmack was a champion of phone-based VR for years at Oculus, but in October, he delivered a "eulogy" for Oculus' phone-based Gear VR. And in a video for receiving a lifetime achievement award this week at the VR Awards, he said that "I really haven't been satisfied with the pace of progress that we've been making" in VR.

Read more of this story at Slashdot.


Apple Is Finally Willing To Make Gadgets Thicker So They Work Better [Slashdot]

Apple has started to make its products thicker in an effort to give people what they want: functionality over form. This is a good thing. There are two recent examples: this year's iPhones and the new 16-inch MacBook Pro. Todd Haselton writes via CNBC: This is a theory, but it seems there may be some design changes being made after the departure of Apple's former chief design officer Jony Ive. Ive was known for creating gorgeous products but, sometimes as we've seen with the older MacBook keyboard, perhaps at the cost of functionality. Form over function, as they say. [...] If you look back at the iPhone 8, for example, the phone measured just 7.3-mm thick, an example of Apple's seeming obsession with creating devices that were as thin as possible but often at the cost of battery life. But this year, Apple put a huge focus on battery life because it knows that's one of the top things people want from their phones (along with great cameras). As a result of the larger battery, this year's iPhone 11 is slightly fatter at 8.3-mm thick. It's barely noticeable but shows that Apple knows people are willing to sacrifice on thinness for a phone that lasts all day. Then there's the 16-inch MacBook Pro that was announced on Wednesday. It's less than 1-mm thicker than the 15-inch MacBook Pro that it replaces, and it weighs 4.3 pounds instead of 4 pounds in the prior model. It's 2% larger than the 15-inch MacBook Pro, too. All of this helps Apple include what people want in a similar but slightly bigger form factor: a keyboard with keys that you can actually tap into and that works, instead of one that's practically flat with very little key travel. The flat so-called butterfly keyboard was vulnerable to dust and debris, which could lead to keys not registering or repeating themselves and, ultimately, lots of typos. Apple also focused on battery life in its new laptop. It lasts an hour longer than last year's model and charges fully in just 2.5 hours.
That's partly because Apple was able to increase the battery size, something that likely contributed to the larger and heavier form factor.

Read more of this story at Slashdot.


The NYPD Kept an Illegal Database of Juvenile Fingerprints For Years [Slashdot]

An anonymous reader quotes a report from The Intercept: For years, the New York Police Department illegally maintained a database containing the fingerprints of thousands of children charged as juvenile delinquents -- in direct violation of state law mandating that police destroy these records after turning them over to the state's Division of Criminal Justice Services. When lawyers representing some of those youths discovered the violation, the police department dragged its feet, at first denying but eventually admitting that it was retaining prints it was supposed to have destroyed. Since 2015, attorneys with the Legal Aid Society, which represents the majority of youths charged in New York City family courts, had been locked in a battle with the police department over retention of the fingerprint records of children under the age of 16. The NYPD did not answer questions from The Intercept about its handling of the records, but according to Legal Aid, the police department confirmed to the organization last week that the database had been destroyed. To date, the department has made no public admission of wrongdoing, nor has it notified the thousands of people it impacted, although it has changed its fingerprint retention practices following Legal Aid's probing. "The NYPD can confirm that the department destroys juvenile delinquent fingerprints after the prints have been transmitted to DCJS," a police spokesperson wrote in a statement to The Intercept. Still, the way the department handled the process -- resisting transparency and stalling even after being threatened with legal action -- raises concerns about how police handle a growing number of databases of personal information, including DNA and data obtained through facial recognition technology. 
As The Intercept has reported extensively, the NYPD also maintains a secretive and controversial "gang database," which labels thousands of unsuspecting New Yorkers -- almost all black or Latino youth -- as "gang members" based on a set of broad and arbitrary criteria. The fact that police were able to violate the law around juvenile fingerprints for years without consequence underscores the need for greater transparency and accountability, which critics say can only come from independent oversight of the department. It's unclear how long the NYPD was illegally retaining these fingerprints, but the report says the state has been using the Automated Fingerprint Identification System since 1989, "and laws protecting juvenile delinquent records have been in place since at least 1977." Legal Aid lawyers estimate that tens of thousands of juveniles could have had their fingerprints illegally retained by police.

Read more of this story at Slashdot.


If you've wanted to lazily merge code on GitHub from the pub, couch or beach, there's now a mobile app for that [The Register]

GitHub opens beta of handheld tools, unveils Arctic stunt, and other stuff

GitHub used the first day of its Universe developer conference to roll out a slew of new projects, including a dedicated mobile app.…


Cloudflare Shenzhen Business and Technology Meetup [The Cloudflare Blog]

Over the past year, Cloudflare has made exciting progress in both product development and service capability: its network now covers more than 194 cities, and it has launched new products such as Bot Management and the Magic Transit scrubbing service. Cloudflare's APAC and China teams enthusiastically hosted the company's first business and technology meetup in mainland China at the Marriott hotel in Shenzhen's Nanshan district, inviting customers and partners from Shenzhen and the surrounding area. Attendance was by invitation only, with guests including customer executives and specialists from a range of industries as well as partners' business leads and top engineers. It was a unique opportunity for guests to discuss products and technology, and share their experiences, with Cloudflare's business representatives and technical experts in a relaxed, comfortable setting.



Xavier Cai, General Manager for China, is responsible for Cloudflare's overall development and business growth in mainland China. He has more than 15 years of industry and regional sales-management experience, having held senior business roles at multinationals including F5 and HP/HPE.

Together with customer success manager Sophie, Xavier presented Cloudflare's business development in China, with a particular focus on building out the local team. Working alongside APAC and US headquarters, Cloudflare China has assembled an efficient, professional team that provides customers and partners with round-the-clock bilingual (Chinese and English) business service and technical support, removing any concerns about timely support or language barriers. Xavier also highlighted the strengths of a multinational team: Cloudflare combines its massive global network, cutting-edge products and services, cross-industry experience, and partner resources to solve local customers' business problems efficiently and at low cost. At the same time, Cloudflare listens closely to valuable feedback from Chinese customers to improve product features and service efficiency.

Agile Development Techniques with Workers and Access


Xin Meng, one of our senior solutions engineers for APAC, is a graduate of the National University of Singapore. He previously worked at international firms including Merrill Lynch, Qualitics, and Singtel, helping customers with solutions for network security, optimization, and architecture.

Drawing on the best of a rich set of existing customer solutions, he showed how to build quickly with Cloudflare Workers and Access, presented the most representative case studies for customers' reference, and answered key questions such as:

  • What is serverless architecture?
  • What is a Worker?
  • What are the best practices for developing with Workers?
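
The Workers questions above lend themselves to a concrete example. A minimal Worker script (a hand-written sketch with an illustrative route and message, not code presented at the event) looks like this:

```typescript
// Minimal Cloudflare Worker sketch. In the Workers runtime, a "fetch"
// event fires for every incoming HTTP request at the edge.
async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/hello") {
    // Respond directly from the edge; no origin round trip needed.
    return new Response("Hello from the edge!", {
      headers: { "content-type": "text/plain" },
    });
  }
  // Anything else falls through with a 404.
  return new Response("Not found", { status: 404 });
}

// Register the handler when running inside the Workers runtime.
if (typeof addEventListener === "function") {
  addEventListener("fetch", (event: any) =>
    event.respondWith(handleRequest(event.request))
  );
}
```

Deployed to Cloudflare's edge, a script like this runs in every data center, which is what makes the serverless model attractive: routing, logic, and responses all happen without standing up an origin server.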

Bot Management and Magic Transit


Cloudflare Bot Management uses traffic data from more than 20 million websites to detect, manage, and block automated bots and content scrapers, and can be enabled with a single click.

Cloudflare Bot Management effectively addresses these common attacks:

  • Credential brute-forcing: repeated login attempts using credentials stolen from other sites, in order to hijack user accounts
  • Malicious scraping: crawling that steals page content wholesale
  • Ad fraud: automated clicks on ad links, plus bogus or spam submissions through contact forms
  • Inventory hoarding: sham purchases that tie up stock, driving away real customers or enabling resale at inflated prices
  • Carding: brute-force validation of stolen credit card numbers followed by fraudulent purchases

Xin Meng walked through Bot Management's capabilities. Cloudflare's worldwide network supplies an enormous pool of data that makes Bot Management smarter through its Machine Learning, Behavioral Analysis, and Automatic Whitelist modules, keeping it ahead of the pack in an information-security era where data is king.

Magic Transit, another of Cloudflare's headline products this year, extends Cloudflare's services to the network layer with more than 30 Tbps of global DDoS scrubbing capacity. Xin Meng focused on how Magic Transit works: customer traffic is steered to Cloudflare's data centers via BGP announcements, all inbound traffic is automatically scrubbed and protected, and clean traffic is routed back to the customer's data center. Anycast GRE tunnels simplify configuration, putting Cloudflare's renowned DDoS mitigation to work for customers' own data centers.

More sessions are on the way. Stay tuned!


During the mid-session tea break, the Cloudflare team enjoyed lively conversations with customers.

Cloudflare colleagues at the Shenzhen meetup: Sophie Qiu (organizer) | Xavier Cai | Xin Meng | George Sun | Vincent Liu | Bruce Li | Alex Thiang | Ellen Miao | Adam Luo | Xiaojin Liu


GitHub Places Open-Source Code In Arctic Cave For Safekeeping [Slashdot]

pacopico writes: GitHub's CEO Nat Friedman traveled to Svalbard in October to stash Linux, Android, and 6,000 other open-source projects in a permafrost-filled, abandoned coal mine. It's part of a project to safeguard the world's software from existential threats and also just to archive the code for posterity. As Friedman says, "If you told someone 20 years ago that in 2020, all of human civilization will depend on and run on open-source code written for free by volunteers in countries all around the world who don't know each other, and it'll just be downloaded and put into almost every product, I think people would say, 'That's crazy, that's never going to happen. Software is written by big, professional companies.' It's sort of a magical moment. Having a historical record of this will, I think, be valuable to future generations." GitHub plans to open several more vaults in other places around the world and to store any code that people want included.

Read more of this story at Slashdot.


Player three has entered Cray's supercomputing game: First AMD Epyc, now Fujitsu's Arm chips [The Register]

A64FX: Big in Japan, big in the US, UK at this rate

Cray has said it will build a family of supercomputers for government research labs and universities. The kicker? The exascale machines will be powered by Arm-compatible microprocessors.…


YouTube's New Kids' Content System Has Creators Scrambling [Slashdot]

As of Tuesday afternoon, YouTube is requiring creators to label any videos of theirs that may appeal to children. If they say a video is directed at kids, data collection will be blocked for all viewers, resulting in lower ad revenue and the loss of some of the platform's most popular features, including comments and end screens. It's a major change in how YouTube works, and has left some creators clueless as to whether they're subject to the new rules. The Verge reports: Reached by The Verge, Google confirmed that this new system was the result of a landmark $170 million settlement YouTube reached with the Federal Trade Commission in September for allegedly violating children's privacy. It's the largest fine ever collected under the Children's Online Privacy Protection Act (COPPA), which forbids collecting data from children under the age of 13 without explicit consent from their parents. In this case, the ruling means YouTube can't employ its powerful ad-targeting system on anyone who might be under the age of 13 -- a dire problem for a platform with so many young users. The new system is already sending creators reeling over what exactly is considered kids' content and what could happen if they unintentionally mislabel videos. Some of YouTube's most popular categories fall into a gray area for the policy, including gaming videos, family vlogging, and toy reviews. [...] In theory, YouTube has always been subject to COPPA, but those restrictions have taken on new urgency in the wake of the recent settlement with the FTC. Under the terms of the settlement, YouTube is required to "develop, implement, and maintain a system for Channel Owners to designate whether their Content on the YouTube Service is directed to Children." Under the system that YouTube rolled out on Tuesday, creators who strictly make children's content can also have their entire channel designated as directed at children. 
Once a video is labeled as kids' content, all personalized ads will be shut off, replaced with "contextualized" advertising based on the video itself. In addition to the removal of targeted ads, child-directed YouTube videos will also no longer include a comments section, click-through info cards, end screens, notification functions, and the community tab. "The consequences for not labeling a video as 'child-directed' could be even more severe," reports The Verge. "In its September order, the FTC made it clear that it could sue individual channel owners who abuse this new labeling system. Crucially, those lawsuits will fall entirely on channel owners, rather than on YouTube itself. Under the settlement, YouTube's responsibility is simply to maintain the system and provide ongoing data updates."

Read more of this story at Slashdot.


Stadia Launch Developer Says Game Makers Are Worried 'Google Is Just Going To Cancel It' [Slashdot]

An anonymous reader quotes a report from Ars Technica: Google has a long and well-documented history of launching new services only to shut them down a few months or years later. And with the launch of Stadia imminent, one launch game developer has acknowledged the prevalence of concerns about that history among her fellow developers while also downplaying their seriousness in light of Stadia's potential. "The biggest complaint most developers have with Stadia is the fear that Google is just going to cancel it," Gwen Frey, developer of Stadia launch puzzle game Kine, said in recently published comments. "Nobody ever says, 'Oh, it's not going to work,' or 'Streaming isn't the future.' Everyone accepts that streaming is pretty much inevitable. The biggest concern with Stadia is that it might not exist." While concerns about Stadia working correctly aren't quite as nonexistent as Frey said, early tests show the service works well enough in ideal circumstances. As for the service's continued existence, Frey thinks such concerns among other developers are "kind of silly." "Working in tech, you have to be willing to make bold moves and try things that could fail," Frey continued. "And yeah, Google's canceled a lot of projects. But I also have a Pixel in my pocket, I'm using Google Maps to get around. I only got here because my Google Calendar told me to get here by giving me a prompt in Gmail. It's not like Google cancels every fucking thing they make." "Nothing in life is certain, but we're committed to making Stadia a success," said Stadia Director of Product Andrew Doronichev in July. "Of course, it's OK to doubt my words. There's nothing I can say now to make you believe if you don't. But what we can do is to launch the service and continue investing in it for years to come."

Read more of this story at Slashdot.


Facebook Says Government Demands For User Data Are at a Record High [Slashdot]

Facebook's latest transparency report is out. The social media giant said the number of government demands for user data increased by 16% to 128,617 demands during the first-half of this year compared to the second-half of last year. From a report: That's the highest number of government demands it's received in any reporting period since it published its first transparency report in 2013. The U.S. government led the way with the largest number of requests -- 50,741 demands for user data resulting in some account or user data given to authorities in 88% of cases. Facebook said two-thirds of all of the U.S. government's requests came with a gag order, preventing the company from telling the user about the request for their data. But Facebook said it was able to release details of 11 so-called national security letters (NSLs) for the first time after their gag provisions were lifted during the period. National security letters can compel companies to turn over non-content data at the request of the FBI. These letters are not approved by a judge, and often come with a gag order preventing their disclosure. But since the Freedom Act passed in 2015, companies have been allowed to request the lifting of those gag orders.

Read more of this story at Slashdot.


More Than 10 Million Sign Up For Disney+ in First Day [Slashdot]

The Walt Disney Company said Wednesday that its new streaming service Disney+ had 10 million sign-ups since it launched Tuesday at midnight. From a report: Disney wouldn't release the number if the company didn't think it represented a major milestone. Disney told investors in the spring that it hopes to reach 60 million to 90 million subscribers by 2024. The number is also notable, considering the service launched with a few hiccups. Early reports on Tuesday suggested the technology for Disney+ began crashing on launch day. Analysts anticipated strong consumer interest prior to the launch of the new service. Polling suggests that consumers were interested in the service at launch because of the access to Disney's movie catalog, as well as its new show, "The Mandalorian."

Read more of this story at Slashdot.


Judge shoots down Trump admin's efforts to allow folks to post shoddy 3D printer gun blueprints online [The Register]

US government told it must give a reason to snub policy

A federal judge in the US state of Washington has struck down a settlement that would allow people to post blueprints and instructions to 3D-print guns, claiming it was unlawful.…


CodeWeavers Is Hiring Another Graphics Developer To Help With Wine D3D / Steam Play [Phoronix]

CodeWeavers is looking to hire another developer to work on Wine's graphics stack and in particular the WineD3D code while having an emphasis that it's part of Valve's Steam Play (Proton) efforts...


Health Websites Are Sharing Sensitive Medical Data with Google, Facebook, and Amazon [Slashdot]

Popular health websites are sharing private, personal medical data with big tech companies, according to an investigation by the Financial Times. From a report: The data, including medical diagnoses, symptoms, prescriptions, and menstrual and fertility information, are being sold to companies like Google, Amazon, Facebook, and Oracle and smaller data brokers and advertising technology firms, like Scorecard and OpenX. The FT analyzed 100 health websites, including WebMD, Healthline, health insurance group Bupa, and parenting site Babycentre, and found that 79% of them dropped cookies on visitors, allowing them to be tracked by third-party companies around the internet. This was done without consent, making the practice illegal under European Union regulations. By far the most common destination for the data was Google's advertising arm DoubleClick, which showed up in 78% of the sites the FT tested.
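
An audit like the FT's boils down to enumerating the third-party hosts a page hands requests to. The sketch below (a hypothetical helper, not the FT's actual tooling) scans a page's `src`/`href` attributes for external hosts; real audits also inspect cookies set at runtime and scripts loaded by other scripts:

```typescript
// Toy third-party audit: list external hosts a page references via
// src/href attributes. (Illustrative only; a full audit would also
// record cookies, XHRs, and dynamically injected trackers.)
function thirdPartyHosts(html: string, firstPartyHost: string): string[] {
  const hosts = new Set<string>();
  const attr = /(?:src|href)=["'](https?:\/\/[^"']+)["']/g;
  let match: RegExpExecArray | null;
  while ((match = attr.exec(html)) !== null) {
    const host = new URL(match[1]).hostname;
    if (host !== firstPartyHost) hosts.add(host); // keep only third parties
  }
  return Array.from(hosts).sort();
}
```

Run against a saved page, anything the function returns is a domain the visitor's browser contacted, which is exactly the signal the FT used to spot ad-tech destinations like DoubleClick.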

Read more of this story at Slashdot.


The Firefox + Chrome Web Browser Performance Impact From Intel's JCC Erratum Microcode Update [Phoronix]

With yesterday's overview and benchmarks of Intel's Jump Conditional Code Erratum, one of the areas where the performance impact of the updated CPU microcode exceeded Intel's 0~4% guidance was web browser performance. Now with more time having passed, here are more web browser benchmarks on both Chrome and Firefox, comparing the new CPU microcode release for the JCC Erratum against the previous release. Simply moving to this new CPU microcode represents a significant hit to web browser performance.


Just Docker room talk: Container upstart's enterprise wing sold to Mirantis, CEO out, Swarm support faces ax [The Register]

Plans to continue with $35m to back Hub and Desktop. Yes, Kubernetes has truly won

Docker has handed the Enterprise portion of its containerization business to Kubernetes cloud outfit Mirantis in a surprise sell-off.…


The Curiosity Rover Detects Oxygen Behaving Strangely on Mars [Slashdot]

NASA's Curiosity rover sniffed out an unexpected seasonal variation to the oxygen on Mars, according to new research. From a report: Since it landed in Gale Crater in 2012, the Curiosity rover has been studying the Martian surface beneath its wheels to learn more about the planet's history. But Curiosity also stuck its nose in the air for a big sniff to understand the Martian atmosphere. So far, this sniffing has resulted in some findings that scientists are still trying to understand. Earlier this year, the rover's tunable laser spectrometer, called SAM, which stands for Sample Analysis at Mars, detected the largest amount of methane ever measured during its mission. SAM has also found that over time, oxygen behaves in a way that can't be explained by any chemical process scientists currently understand. SAM has had plenty of time -- about six years -- to sniff and analyze the atmospheric composition on Mars. The data revealed that at the surface, 95% of the atmosphere is carbon dioxide, followed by 2.6% molecular nitrogen, 1.9% argon, 0.16% oxygen and 0.06% carbon monoxide. Like Earth, Mars goes through its seasons; over the course of a year, the air pressure changes. This happens when the carbon dioxide gas freezes in winter at the poles, causing the air pressure to lower. It rises again in the spring and summer, redistributing across Mars as the carbon dioxide evaporates. In relation to the carbon monoxide, nitrogen and argon also follow similar dips and peaks. But oxygen didn't. Surprisingly, the oxygen actually rose by a peak increase of 30% in the spring and summer before dropping back to normal in the fall.

Read more of this story at Slashdot.


Boston Dynamics CEO on the Company's Top 3 Robots, AI, and Viral Videos [Slashdot]

In a rare interview, Boston Dynamics CEO Marc Raibert talked about the three robots the company is currently focused on (today -- Spot, tomorrow -- Handle, and the future -- Atlas), its current customers, potential applications, AI, simulation, and of course those viral videos. An excerpt from the interview: "Today," for Raibert, refers to a time period that extends over the course of the next year or so. Spot is the "today" robot because it's already shipping to early adopters. In fact, it's only been shipping for about six weeks. Boston Dynamics wants Spot to be a platform -- Raibert has many times referred to it as "the Android of robots." Spot, which weighs about 60 pounds, "is not an end-use application robot," said Raibert. Users can add hardware payloads, and they can add software that interacts with Spot through its API. In fact, Raibert's main purpose in attending Web Summit was to inspire attendees to develop hardware and software for Spot. Boston Dynamics has an arm, spectrum radio, cameras, and lidars for Spot, but other companies are developing their own sensors. The "Spot" we're talking about is technically the SpotMini. It was renamed when it succeeded its older, bigger brother Spot. "The legacy Spot was a research project. We're really not doing anything with it at the moment. We just call it 'Spot' now; it's the product." Spot can go up and down stairs by using obstacle detection cameras to see railings and steps. It also has an autonomous navigation system that lets it traverse terrain. While Spot can be steered by a human, the computers onboard regulate the legs and balance. Spot travels at about 3 miles per hour, which is about human walking speed. It has cameras on its front, back, and sides that help it navigate, travel autonomously, and move omnidirectionally. It has different gaits (slow, walking, running, and even show-off), can turn in place, and has a "chicken head" mode.
That last one means it can decouple the motion of its hand from its body, similar to how many animals can stabilize one part while the rest of the body moves.

Read more of this story at Slashdot.


LibreOffice 6.4 Branched - Beta Release Underway With QR Code Generator, Threading Improvements [Phoronix]

As of this morning LibreOffice 6.4 was branched from master and the beta release tagged, with the LO 6.4 Beta binaries expected out shortly...


TPM-FAIL Vulnerabilities Impact TPM Chips In Desktops, Laptops, Servers [Slashdot]

An anonymous reader writes: A team of academics has disclosed today two vulnerabilities known collectively as TPM-FAIL that could allow an attacker to retrieve cryptographic keys stored inside TPMs. The first vulnerability is CVE-2019-11090 and impacts Intel's Platform Trust Technology (PTT). Intel PTT is Intel's fTPM software-based TPM solution and is widely used on servers, desktops, and laptops, being supported on all Intel CPUs released since 2013, starting with the Haswell generation. The second is CVE-2019-16863 and impacts the ST33 TPM chip made by STMicroelectronics. This chip is incredibly popular and is used on a wide array of devices ranging from networking equipment to cloud servers, being one of the few chips that received a CommonCriteria (CC) EAL 4+ classification — which implies it comes with built-in protection against side-channel attacks like the ones discovered by the research team. Unlike most TPM attacks, these ones were deemed practical. A local adversary can recover the ECDSA key from Intel fTPM in 4-20 minutes depending on the access level. We even show that these attacks can be performed remotely on fast networks, by recovering the authentication key of a virtual private network (VPN) server in 5 hours.

Read more of this story at Slashdot.


Silicon Valley's Singularity University Is Cutting Staff, CEO Exits [Slashdot]

Singularity University, a Silicon Valley institute offering education on futurism, is reckoning with its own uncertain future. The chief executive officer is stepping down, and the organization plans to eliminate staff. From a report: The changes were outlined in an email Tuesday reviewed by Bloomberg that was sent to faculty by Erik Anderson, the executive chairman. They mark an extended decline for the company, which has in recent years lost an annual grant from Google and faced allegations of sexual assault, embezzlement and discrimination. Rob Nail, who ran Singularity for the last eight years, is leaving to pursue new career opportunities, Anderson wrote in the email. Singularity is conducting a CEO search, said a spokesman. The announcement of job cuts was made in line with U.S. labor law, which requires 60-day notice for companies with more than 100 employees, the spokesman said. Singularity declined to specify how many jobs would be affected, but a person familiar with the matter put the total at about 60. This person said many of those workers were informed of the news while attending a Singularity summit in Athens that ended Tuesday. Singularity, which takes its name from the notion that humans will someday merge with machines, was introduced in 2009 during a TED Talk by futurist Ray Kurzweil. The group operates for profit but with a mandate for social responsibility. Many alumni of its programs credit the organization with teaching them about cutting-edge concepts and helping them think more expansively.

Read more of this story at Slashdot.


Complete with keyboard and actual, literal, 'physical' escape key: Apple emits new 16" $2.4k+ MacBook Pro [The Register]

Make a notebook, fanbois

A 16-inch MacBook Pro – with a freshly designed keyboard that isn't trashed by dust and includes a "physical" escape key – has landed, but it won't come cheap, costing the same as a modest family holiday or a second-hand car.…


Radeon Pro Software for Enterprise 19.Q4 for Linux Released [Phoronix]

AMD on Tuesday released their Radeon Pro Software for Enterprise 19.Q4 for Linux package as their newest quarterly driver release intended for their professional graphics card offerings...


Mozilla, Intel, and More Form the Bytecode Alliance To Take WebAssembly Beyond Browsers [Slashdot]

slack_justyb writes: Mozilla has been heavily invested in WebAssembly with Firefox, and today, the organization teamed up with a few others to form the new Bytecode Alliance, which aims to create "new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI)." Mozilla has teamed up with Intel, Red Hat, and Fastly to found the alliance, but more members are likely to join over time. The goal of the Bytecode Alliance is to create a new runtime environment and language toolchains which are secure, efficient, and modular, while also being available on as many platforms and devices as possible. The technologies being developed by the Bytecode Alliance are based on WebAssembly and WASI, which have been seen as a potential replacement for JavaScript due to more efficient code compiling, and the expanded capabilities of being able to port C and C++ code to the web. To kick things off, the founding members have already contributed a number of open-source technologies to the Bytecode Alliance, including Wasmtime, a lightweight WebAssembly runtime; Lucet, an ahead-of-time compiler; WebAssembly Micro Runtime; and Cranelift.

Read more of this story at Slashdot.


They terrrk err jerrrbs! Vodafone replaces 2,600 roles with '600 bots' in bid to shrink €48bn debt [The Register]

It's happening!

Vodafone has replaced 2,600 roles with "600 bots" as part of a "long-lasting structural opportunity to reduce cost", the company revealed in its half-year results earnings call.…


Apple's Phil Schiller Takes Shots at Chromebooks, Says They're 'Not Going To Succeed' [Slashdot]

In an interview about the 16-inch MacBook Pro, Apple senior vice president Phil Schiller made a direct attack on Chromebooks. When asked about the growth of Chrome OS in the education sector, Schiller attributes the success of Chromebooks to their being "cheap." He said: Kids who are really into learning and want to learn will have better success. It's not hard to understand why kids aren't engaged in a classroom without applying technology in a way that inspires them. You need to have these cutting-edge learning tools to help kids really achieve their best results. Yet Chromebooks don't do that. Chromebooks have gotten to the classroom because, frankly, they're cheap testing tools for required testing. If all you want to do is test kids, well, maybe a cheap notebook will do that. But they're not going to succeed.

Read more of this story at Slashdot.


Khronos Next Pursuing An Analytic Rendering API [Phoronix]

The Khronos Group has been expanding into a lot of new areas in recent times, from OpenXR to 3D Commerce to NNEF, and is now forming an exploratory group for creating an analytic rendering API...


Transcription Platform Rev Slashes Minimum Pay for Workers [Slashdot]

Rev, one of the biggest names in transcription -- and one of the cheapest services of its kind -- opted to alter its pay structure with little warning for thousands of contractors on its platform, some of whom are furious at what they expect will be smaller paychecks from here on out. From a report: Launched in 2010, Rev made a name for itself by charging customers who wanted transcriptions of interviews, videos, podcasts, or whatever else the bargain-basement price of $1 per minute of audio. That's attracted some notable clients, including heavyweight podcast This American Life, according to the company. According to one whistleblower, a little less than half of that buck went to the contractor, while about 50 to 55 cents on the dollar lined Rev's pockets. But in an effort to "more fairly compensate Revvers for the effort spent on files," Rev announced on an internal message board on Wednesday that its job pricing model would change -- with a new minimum of 30 cents per minute (cpm) going into effect last Friday. "There was an internal forum post made two days prior, but not everybody checks the forums," one Revver who wished to remain anonymous for fear of retaliation, told Gizmodo. "A lot of people found out when they logged on on Friday. People are still showing up in the forums asking what's going on!"

Read more of this story at Slashdot.


Saturday Morning Breakfast Cereal - Nerd Jokes [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

Everyone I showed this to told me it wasn't funny, BUT MY HEART SAID YES.

Today's News:


TalkTalk keeps results under wraps citing 'advanced negotiations' over FibreNation biz [The Register]

Has it found an investor in £1.5bn venture to build 3 million FTTP connections?

TalkTalk has today delayed its financial results due to "advanced negotiations with interested parties regarding its FibreNation business".…


Next in Google's Quest for Consumer Dominance -- Banking [Slashdot]

Google will soon offer checking accounts to consumers, becoming the latest Silicon Valley heavyweight to push into finance. The Wall Street Journal: The project, code-named Cache, is expected to launch next year with accounts run by Citigroup and a credit union at Stanford University, a tiny lender in Google's backyard. Big tech companies see financial services as a way to get closer to users and glean valuable data. Apple introduced a credit card this summer. Amazon has talked to banks about offering checking accounts. Facebook is working on a digital currency it hopes will upend global payments. Their ambitions could challenge incumbent financial-services firms, which fear losing their primacy and customers. They are also likely to stoke a reaction in Washington, where regulators are already investigating whether large technology companies have too much clout. The tie-ups between banking and technology have sometimes been fraught. Apple irked its credit-card partner, Goldman Sachs Group, by running ads that said the card was "designed by Apple, not a bank." Major financial companies dropped out of Facebook's crypto project after a regulatory backlash. Google's approach seems designed to make allies, rather than enemies, in both camps. The financial institutions' brands, not Google's, will be front-and-center on the accounts, an executive told The Wall Street Journal. And Google will leave the financial plumbing and compliance to the banks -- activities it couldn't do without a license anyway.

Read more of this story at Slashdot.


Apple Unveils New 16-inch MacBook Pro With Improved Keyboard, Starting at $2,400 [Slashdot]

Apple today launched a new 16-inch MacBook Pro. The starting price of $2,399 is the same price as the previous 15-inch MacBook Pro, which this one replaces. It has new processors, better speakers, a larger screen, and (finally) a better keyboard. The base model is powered by a 2.6GHz 6-core 9th gen Intel Core i7 processor (Turbo Boost up to 4.5 GHz) coupled with AMD Radeon Pro 5300M GPU with 4GB of GDDR6 memory, 16GB of 2666MHz DDR4 RAM, and 512GB PCIe-based onboard SSD. John Gruber, writing about the keyboard: We got it all: a return of scissor key mechanisms in lieu of butterfly switches, a return of the inverted-T arrow key arrangement, and a hardware Escape key. Apple stated explicitly that their inspiration for this keyboard is the Magic Keyboard that ships with iMacs. At a glance, it looks very similar to the butterfly-switch keyboards on the previous 15-inch MacBook Pros. But don't let that fool you -- it feels completely different. There's a full 1mm of key travel; the butterfly keyboards only have 0.5mm. This is a very good compromise on key travel, balancing the superior feel and accuracy of more travel with the goal of keeping the overall device thin. (The new 16-inch MacBook Pro is, in fact, a little thicker than the previous 15-inch models overall.) Calling it the "Magic Keyboard" threads the impossible marketing needle they needed to thread: it concedes everything while confessing nothing. Apple has always had a great keyboard that could fit in a MacBook -- it just hasn't been in a MacBook the last three years. There's also more space between keys -- about 0.5mm. This difference is much more noticeable by feel than by sight. Making it easier to feel the gaps between keys really does make a difference. Like the 15-inch MacBook Pro, all 16-inch models come with the Touch Bar. But even there, there's a slight improvement: it's been nudged further above the top row of keys, to help avoid accidental touches. 
No haptic feedback or any other functional changes to the Touch Bar, though.

Read more of this story at Slashdot.


Thanks, Brexit. Tesla boss Elon Musk reveals Berlin as location for Euro Gigafactory [The Register]

Was UK even really in the running?

'Leccy car baron and space botherer Elon Musk has unveiled a surprising pick of Berlin for the company's European "Gigafactory 4", quickly following up by blabbing to car mag Auto Express that "Brexit had made it too risky to put a Gigafactory in the UK."…


Phoronix Test Suite 9.2 Milestone 2 Released [Phoronix]

The second development release of Phoronix Test Suite 9.2-Hurdal is now available for open-source, cross-platform and fully-automated benchmarking...


I've had it with these motherflipping eggs on this motherflipping train [The Register]

Woman fined £1,500 for tirade over commuter's weird brekkie

Eating on the train is no yolk. One woman felt so strongly about it, she's now nursing a £1,500 fine after eggsploding with rage at a fellow commuter for gobbling a hard-boiled pre-chicken on the service from Chelmsford to London Liverpool Street.…


Dutch Court Orders Facebook To Ban Celebrity Crypto Scam Ads [Slashdot]

An anonymous reader quotes a report from TechCrunch: A Dutch court has ruled that Facebook can be required to use filter technologies to identify and preemptively take down fake ads linked to cryptocurrency scams that carry the image of a media personality, John de Mol, and other well known celebrities. The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his and other celebrities' likeness to shill Bitcoin scams via fake ads run on its platform. In an immediately enforceable preliminary judgement today, the court ordered Facebook to remove all offending ads within five days, and provide data on the accounts running them within a week. Per the judgement, victims of the crypto scams had reported a total of ~$1.8M in damages to the Dutch government at the time of the court summons. It's not yet clear whether the company will appeal, but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

Read more of this story at Slashdot.


The Linux Kernel Disabling HPET For Intel Coffee Lake [Phoronix]

Another Intel change being sent off for Linux 5.4 and to be back-ported to current stable series is disabling of HPET for Coffee Lake systems...


Redis releases automatic cluster recovery for Kubernetes and RedisInsight GUI tool [The Register]

Also: what's in Redis 6 ... and how to compete with free Redis on public cloud

Interview  "Almost every one of our on-prem customers is shifting to K8s," Redis Labs CTO and co-founder Yiftach Shoolman tells The Register.…


UK Info Commish quietly urged court to swat away 100k Morrisons data breach sueball [The Register]

Supermarket says it's innocent and we don't need more than that, ICO told judges

The UK's Information Commissioner urged the Court of Appeal to side with Morrisons in the supermarket’s battle to avoid liability for the theft and leaking of nearly 100,000 employees’ payroll details – despite not having read the employees’ legal arguments.…


Fancy renting your developer environment? Visual Studio goes online [The Register]

Or you could try Gitpod...

Microsoft is offering cloud-hosted developer environments for those using Visual Studio Code or, in private preview, Visual Studio.…


Dell Unveils Subscription Model To Counter Amazon, Microsoft [Slashdot]

Dell is planning to offer business clients a subscription model for products like servers and personal computers, "seeking to counter the lure of cloud services from Amazon and Microsoft," reports Bloomberg. From the report: Dell and its hardware peers have been under pressure to offer corporate clients the flexibility and simplicity of infrastructure cloud services. Public cloud titans such as Amazon Web Services and Microsoft Azure have cut demand for data-center hardware as more businesses look to rent computing power rather than invest in their own server farms. Rival Hewlett Packard Enterprise said in June that it would move to a subscription model by 2022. Research firm Gartner predicts 15% of data-center hardware deals will include pay-per-use pricing in 2022, up from 1% in 2019, Dell said. Dell is making it easier for clients to upgrade their hardware since they don't have to spend a large amount of capital expenditures upfront, but can pay a smaller amount each month that counts toward a company's operating expenditures. For the consumption programs, customers pay for the amount of storage or computing power they use. Companies can also hire Dell to completely manage their hardware infrastructure for them. While Dell's overall sales climbed 2% in the quarter that ended Aug. 2, demand for its servers and networking gear dropped 12% in a reversal from last year, when there was unprecedented customer interest in the products. Dell still expects the vast majority of customers to pay upfront for products in the next three to five years, Grocott said.

Read more of this story at Slashdot.


Londoner accused of accessing National Lottery users' accounts [The Register]

Case to be heard in full next year

A man will appear at Crown court in December to answer charges that he used hacking program Sentry MBA to access and take money from online UK National Lottery gambling accounts.…


AMD GCN OpenMP/OpenACC Offloading Patches For The GCC 10 Compiler [Phoronix]

Over the past year Code Sourcery / Mentor Graphics has been working extensively on the new AMD Radeon "GCN" back-end for the GCC compiler. The code found in GCC 9 and, up to now, GCC 10 hasn't supported the OpenMP/OpenACC parallel programming interfaces, but that could soon change with patches under review...


Four go wild for wasm: Corporate quartet come together to build safe WebAssembly sandbox [The Register]

Chipzilla, Mozilla, Fastly, and IBM's red-hatted stepchild plot browser-breakout

On Tuesday Fastly, Intel, Mozilla, and Red Hat teamed up to form the Bytecode Alliance, an industry group intent on making WebAssembly work more consistently and securely outside of web browsers.…


Edit images on Fedora easily with GIMP [Fedora Magazine]

GIMP (short for GNU Image Manipulation Program) is free and open-source image manipulation software. With many capabilities ranging from simple image editing to complex filters, scripting and even animation, it is a good alternative to popular commercial options.

Read on to learn how to install and use GIMP on Fedora. This article covers basic daily image editing.

Installing GIMP

GIMP is available in the official Fedora repository. To install it run:

sudo dnf install gimp

Single window mode

Once you open the application, it shows a dark-themed window with the toolbox and the main editing area. Note that it has two window modes that you can switch between by selecting Windows -> Single Window Mode. By checking this option, all components of the UI are displayed in a single window; otherwise, they are separate.

Loading an image

Fedora 30 Background

To load an image, go to File -> Open and choose your image file.

Resizing an image

To resize the image, you can scale it by a couple of units, including pixels and percentage — the two units that are often handiest when editing images.

Let’s say we need to scale down the Fedora 30 background image to 75% of its current size. To do that, select Image -> Scale and then on the scale dialog, select percentage in the unit drop down. Next, enter 75 as width or height and press the Tab key. By default, the other dimension will automatically resize in correspondence with the changed dimension to preserve aspect ratio. For now, leave other options unchanged and press Scale.

Scale Dialog In GIMP

The image scales to 75% of its original size.
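Under the hood, the aspect-ratio preservation is simple arithmetic: both dimensions are multiplied by the same factor. Here is a minimal Python sketch of that calculation (the function name scaled_size is just for illustration; it is not part of GIMP):

```python
def scaled_size(width, height, percent):
    """Return (new_width, new_height) after scaling by `percent`,
    preserving the aspect ratio as GIMP's Scale dialog does by default."""
    factor = percent / 100.0
    # Pixel dimensions must be whole numbers, so round the results.
    return round(width * factor), round(height * factor)

# Scaling a 1920x1080 image to 75% of its size:
print(scaled_size(1920, 1080, 75))  # → (1440, 810)
```

This mirrors what the dialog does when you type 75 in one field and press Tab: the other field is recomputed with the same factor.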

Rotating images

Rotating is a transform operation, so you'll find it under Image -> Transform in the main menu, where there are options to rotate the image by 90 or 180 degrees. The same submenu also offers options for flipping the image vertically or horizontally.

Let’s say we need to rotate the image 90 degrees. After applying a 90-degree clockwise rotation and horizontal flip, our image will look like this:

Transforming an image with GIMP
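Conceptually, these transforms are just remappings of pixel coordinates. Here is a minimal Python sketch of the two operations applied above, using a nested list as a stand-in for a pixel grid (illustrative only, not GIMP's actual API):

```python
def rotate90_cw(rows):
    """Rotate a pixel grid 90 degrees clockwise, like
    Image -> Transform -> Rotate 90 degrees clockwise."""
    # Reverse the row order, then transpose rows into columns.
    return [list(col) for col in zip(*rows[::-1])]

def flip_horizontal(rows):
    """Mirror each row, like Image -> Transform -> Flip Horizontally."""
    return [list(reversed(row)) for row in rows]

grid = [[1, 2],
        [3, 4]]
print(rotate90_cw(grid))      # → [[3, 1], [4, 2]]
print(flip_horizontal(grid))  # → [[2, 1], [4, 3]]
```

Applying the rotation and then the horizontal flip in sequence gives the kind of result shown in the screenshot.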

Adding text

Adding text is very easy. Just select the A icon from the toolbox, and click on a point on your image where you want to add the text. If the toolbox is not visible, open it from Windows -> New Toolbox.

As you edit the text, you might notice that the text dialog has font customization options including font family, font size, etc.

Add Text To Images
Adding text to image in GIMP

Saving and exporting

You can save your work as a GIMP project with the .xcf extension from File -> Save or by pressing Ctrl+S. Or you can export your image to formats such as PNG or JPEG. To export, go to File -> Export As or hit Ctrl+Shift+E, and you will be presented with a dialog where you can choose the file name and output format.


GNU Assembler Patches Sent Out For Optimizing The Intel Jump Conditional Code Erratum [Phoronix]

Now that Intel has lifted its embargo on the "Jump Conditional Code" erratum affecting Skylake through Cascade Lake processors, the assembler patches, first carried by Intel's own Clear Linux, have been sent out on the Binutils mailing list in a bid to get the JCC optimization work into the upstream Binutils/GAS code-base...


Astroboffins baffled as Curiosity rover takes larger gasps of oxygen in Martian summers [The Register]

It might be organic life, but more likely chemistry says NASA

A new Martian mystery has left scientists baffled. The oxygen in the planet’s atmosphere seems to rise every spring and summer and fall during autumn and winter, and scientists have no idea why.…


Are We Living In a Blade Runner World? [Slashdot]

Now that we have arrived in Blade Runner's November 2019 "future," the BBC asks what the 37-year-old film got right. Slashdot reader dryriver shares the report: [B]eyond particular components, Blade Runner arguably gets something much more fundamental right, which is the world's socio-political outlook in 2019 -- and that isn't particularly welcome, according to Michi Trota, who is a media critic and the non-fiction editor of the science-fiction periodical, Uncanny Magazine. "It's disappointing, to say the least, that what Blade Runner "predicted" accurately is a dystopian landscape shaped by corporate influence and interests, mass industrialization's detrimental effect on the environment, the police state, and the whims of the rich and powerful resulting in chaos and violence, suffered by the socially marginalized." [...] As for the devastating effects of pollution and climate change evident in Blade Runner, as well as its 2017 sequel Blade Runner 2049, "the environmental collapse the film so vividly depicts is not too far off from where we are today," says science-fiction writer and software developer Matthew Kressel, pointing to the infamous 2013 picture of the Beijing smog that looks like a cut frame from the film. "And we're currently undergoing the greatest mass extinction since the dinosaurs died out 65 million years ago. In addition, the film's depiction of haves and have-nots, those who are able to live comfortable lives, while the rest live in squalor, is remarkably parallel to the immense disparity in wealth between the world's richest and poorest today. In that sense, the film is quite accurate." [...] And it can also provide a warning for us to mend our ways. Nobody, surely, would want to live in the November 2019 depicted by Blade Runner, would they? Don't be too sure, says Kressel. "In a way, Blade Runner can be thought of as the ultimate cautionary tale," he says. 
"Has there ever been a vision so totally bleak, one that shows how environmental degradation, dehumanization and personal estrangement are so harmful to the future of the world? "And yet, if anything, Blade Runner just shows the failure of the premise that cautionary tales actually work. Instead, we have fetishized Blade Runner's dystopian vision. Look at most art depicting the future across literature, film, visual art, and in almost all of them you will find echoes of Blade Runner's bleak dystopia. "Blade Runner made dystopias 'cool,' and so here we are, careening toward environmental collapse one burned hectare of rainforest at a time. If anything, I think we should be looking at why we failed to heed its warning."

Read more of this story at Slashdot.

Tuesday, 12 November


Replay on-demand online: Can you boost productivity with infrastructure as code? [The Register]

Set your developers free to innovate

Webcast  Skilled developers are a valuable asset – so how do you make the most of their time as constant requests and projects compete for their attention?…


VirtualBox SF Driver Ejected From The Linux 5.4 Kernel [Phoronix]

Merged to the mainline Linux kernel last week was a driver providing VirtualBox guest shared folder support; the driver had until now been out-of-tree but is important for sharing files between the host and guest VM(s). While the driver was part of Linux 5.4-rc7, Linus Torvalds decided to delete it on Tuesday...


Russian bloke charged in US with running $20 million stolen card-as-a-service online souk [The Register]

Prosecutors say 29-year-old was mastermind of prolific 'Cardplanet' operation

A Russian man was detained at Dulles airport in Washington DC on Monday and charged with running a stolen card trading ring that was responsible for $20m worth of fraud.…


Physics Experiment With Ultrafast Laser Pulses Produces a Previously Unseen Phase of Matter [Slashdot]

An anonymous reader quotes a report from Phys.Org: Adding energy to any material, such as by heating it, almost always makes its structure less orderly. Ice, for example, with its crystalline structure, melts to become liquid water, with no order at all. But in new experiments by physicists at MIT and elsewhere, the opposite happens: When a pattern called a charge density wave in a certain material is hit with a fast laser pulse, a whole new charge density wave is created -- a highly ordered state, instead of the expected disorder. The surprising finding could help to reveal unseen properties in materials of all kinds. The experiments made use of a material called lanthanum tritelluride, which naturally forms itself into a layered structure. In this material, a wavelike pattern of electrons in high- and low-density regions forms spontaneously but is confined to a single direction within the material. But when hit with an ultrafast burst of laser light -- less than a picosecond long, or under one trillionth of a second -- that pattern, called a charge density wave or CDW, is obliterated, and a new CDW, at right angles to the original, pops into existence. This new, perpendicular CDW is something that has never been observed before in this material. It exists for only a flash, disappearing within a few more picoseconds. As it disappears, the original one comes back into view, suggesting that its presence had been somehow suppressed by the new one. The study has been published in the journal Nature Physics.

Read more of this story at Slashdot.


Amazon's Heavy Recruitment of Chinese Sellers Puts Consumers At Risk [Slashdot]

A Wall Street Journal investigation found that Amazon's China business "aggressively recruited Chinese manufacturers and merchants to sell to consumers outside the country. And these sellers, in turn, represent a high proportion of problem listings found on the site." From the report: The Journal earlier this year uncovered 10,870 items for sale between May and August that have been declared unsafe by federal agencies, are deceptively labeled, lacked federally-required warnings, or are banned by federal regulators. Amazon said it investigated the items, and some listings were taken down after the Journal's reporting. Of 1,934 sellers whose addresses could be determined, 54% were based in China, according to a Journal analysis of data from research firm Marketplace Pulse. Amazon's China recruiting is one reason why its platform increasingly resembles an unruly online flea market. A new product listing is uploaded to Amazon from China every 1/50th of a second, according to slides its officials showed at a December conference in the industrial port city of Ningbo. Chinese factories are squeezing profit margins for middlemen who sell on Amazon's third-party platform. Some U.S. sellers fear the next step will be to cut them out entirely. In response to this article, an Amazon spokesman said, "Bad actors make up a tiny fraction of activity in our store and, like honest sellers, can come from every corner of the world. Regardless of where they are based, we work hard to stop bad actors before they can impact the shopping or selling experience in our store."

Read more of this story at Slashdot.


UCLA Now Has the First Zero-Emission, All-Electric Mobile Surgical Instrument Lab [Slashdot]

UCLA's new mobile surgical lab is a zero-emission, all-electric vehicle that will move back and forth between two UCLA campuses, collecting, sterilizing and repairing surgical instruments for the medical staff there. TechCrunch reports: Why is that even needed? The usual process is sending surgical instruments out to a third party for this kind of service, which handles them in a dedicated facility at a significant annual cost. UCLA Health Center estimates that it can save as much as $750,000 per year using the EV lab from Winnebago instead. The traveling lab can operate for around eight hours, including round-trips between the two hospital campuses, or for a total distance traveled of between 85 and 125 miles on a single charge of its battery, depending on usage. It also offers "the same level of performance, productivity and compliance" as a lab in a fixed-location building, according to Winnebago.

Read more of this story at Slashdot.


Section 230 supporters turn on it, its critics rely on it. Up is down, black is white in the crazy world of US law [The Register]

Meanwhile Facebook appears to have shot itself in the foot

Up is down and down is up when it comes to one of the most important, and now controversial, US legal protections for internet companies.…


Unusual New 'PureLocker' Ransomware Is Going After Servers [Slashdot]

Researchers at Intezer and IBM X-Force have detected an unconventional form of ransomware that's being deployed in targeted attacks against enterprise servers. They're calling it PureLocker because it's written in the PureBasic programming language. ZDNet reports: It's unusual for ransomware to be written in PureBasic, but it provides benefits to attackers because security vendors sometimes struggle to generate reliable detection signatures for malicious software written in this language. PureBasic is also transferable between Windows, Linux, and OS-X, meaning attackers can more easily target different platforms. "Targeting servers means the attackers are trying to hit their victims where it really hurts, especially databases which store the most critical information of the organization," Michael Kajiloti, security researcher at Intezer, told ZDNet. There are currently no figures on the number of PureLocker victims, but Intezer and IBM X-Force have confirmed the ransomware campaign is active, with the ransomware being offered to attackers 'as-a-service.' However, it's also believed that rather than being offered to anyone who wants it, the service is offered as a bespoke tool, only available to cybercriminal operations that can afford to pay a significant sum in the first place. The source code of PureLocker offers clues to its exclusive nature, as it contains strings from the 'more_eggs' backdoor malware. This malware is sold on the dark web by what researchers describe as a 'veteran' provider of malicious services. These tools have been used by some of the most prolific cybercriminal groups operating today, including Cobalt Gang and FIN6 -- and the ransomware shares code with previous campaigns by these hacking gangs. It indicates that PureLocker is designed for criminals who know what they're doing and know how to hit a large organization where it hurts.

Read more of this story at Slashdot.


A Fired Kickstarter Organizer Is Trying To Unionize Tech Workers Using Kickstarter [Slashdot]

An anonymous reader quotes a report from Motherboard: In early September, the crowdfunding platform Kickstarter fired two union organizers in eight days. One of them was Clarissa Redwine, who considered her termination to be a blatant act of retaliation for organizing what could become the first union at a major tech company in the United States. Although Redwine lost her job, she has not given up her vision. Today, she launched "Solidarity Onboarding," a new project designed to help workers unionize the tech industry -- using her former employer's platform. A collaboration between current and former organizers at WeWork, Google, Facebook, and other tech companies and coalitions, the project consists of an onboarding kit (booklet, pin, pencil, sticker) for tech workers interested in unionizing. "This kit is passed between coworkers as an act of solidarity and a signal that there is room to organize at your company," the project states. "Imagine the mirror image of a company's onboarding kit but for the tech labor movement," Redwine told Motherboard. "The focal point of this onboarding kit is a booklet of anti-worker statements. It's a collection of common talking points companies use to dissuade employees from taking collective action. Think of it as a union-busting artifact passed across companies from worker to worker." Within four hours of the project's launch, Redwine raised over three times her goal of $1,000. The kit's booklet includes a collection of real anti-union quotes from tech CEOs -- including one from an email Kickstarter CEO Aziz Hasan sent to his employees in September, in response to the firings of Redwine and another union organizer: "The union framework is inherently adversarial. That dynamic doesn't reflect who we are as a company, how we interact, how we make decisions, or where we need to go."
Another page includes a statement from an Amazon anti-union training video: "Our business model is built upon speed, innovation, and customer obsession -- things that are generally not associated with a union. When we lose sight of those critical focus areas we jeopardize everyone's job security: yours, mine, and the associates." "Clarissa's creative project is, of course, welcome on our platform," a spokesperson for Kickstarter said. "Kickstarter is a place where creators can share their ideas with the world and find people who want to support those ideas. We also welcome the continued dialogue among our staff members about the idea of a union at Kickstarter. We unequivocally support our staff's right to decide the unionization question for themselves."

Read more of this story at Slashdot.


The Gaming Performance Impact From The Intel JCC Erratum Microcode Update [Phoronix]

This morning I provided a lengthy look at the performance impact of Intel's JCC Erratum and the CPU microcode update issued for Skylake through Cascade Lake to mitigate potentially unpredictable behavior when jump instructions cross cache lines. That overview covered many benchmarks, but there wasn't time for any gaming tests prior to publishing. Now, with more time passed, here is an initial look at how Linux gaming performance is impacted by the newly-released Intel CPU microcode for this Jump Conditional Code issue.


IBM's 200,000 Macs Have Made a Happier and More Productive Workforce, Study Finds [Slashdot]

sbinning shares a report from AppleInsider: IBM has published its latest study focusing on the benefits of Apple products in enterprise, and has found that a fleet of over 200,000 Macs leads to far lower support costs, smaller numbers of support staff, and happier employees versus a Windows deployment. In the study presented on Tuesday, IBM says that employees who used Mac machines were 22 percent more likely to exceed expectations in performance reviews compared to Windows users. Sales deals generated by Mac-using employees had 16 percent larger proceeds as well. Turning to employee satisfaction, the first-of-its-kind study shows that Mac users were 17 percent less likely to leave IBM compared to their Windows counterparts. Mac users also were happier with the software available, with 5 percent asking for additional software compared to 11 percent of Windows users. A team of seven engineers is needed to maintain 200,000 Macs, whereas a team of 20 is needed for that number of Windows PCs. During setup, the migration process was simple for 98 percent of Mac users versus only 86 percent of those moving from Windows 7 to Windows 10. Windows users were also five times as likely to need on-site support.

Read more of this story at Slashdot.


Shock! US border cops need 'reasonable suspicion' of a crime before searching your phone, laptop [The Register]

Massachusetts judge reminds America of that little thing called the Fourth Amendment

The seizure and search of phones and laptops at the US border is unconstitutional, a judge said Tuesday in a landmark ruling.…


Tesla's European Gigafactory Will Be Built In Berlin [Slashdot]

Tesla's European gigafactory will be built in the Berlin area, Elon Musk said Tuesday during an awards ceremony in Germany. TechCrunch reports: Musk was onstage to receive a Golden Steering Wheel Award given by BILD. "There's not enough time tonight to tell all the details," Musk said during an onstage interview with Volkswagen Group CEO Herbert Diess. "But it's in the Berlin area, and it's near the new airport." Tesla is also going to create an engineering and design center in Berlin because "I think Berlin has some of the best art in the world," Musk said. Diess thanked Musk while onstage for "pushing us" toward electrification. Diess later said that Musk and Tesla are demonstrating that moving toward electrification works. "I don't think Germany is that far behind," Musk said when asked about why German automakers were behind in electric vehicles. He later added that some of the best cars in the world are made in Germany. "Everyone knows that German engineering is outstanding and that's part of the reason we're locating our Gigafactory Europe in Germany," Musk said. On Twitter, Musk said the Berlin-based gigafactory "Will build batteries, powertrains & vehicles, starting with Model Y."

Read more of this story at Slashdot.


Intel Fixes a Security Flaw It Said Was Repaired 6 Months Ago [Slashdot]

An anonymous reader quotes a report from The New York Times: Last May, when Intel released a patch for a group of security vulnerabilities researchers had found in the company's computer processors, Intel implied that all the problems were solved. But that wasn't entirely true, according to Dutch researchers at Vrije Universiteit Amsterdam who discovered the vulnerabilities and first reported them to the tech giant in September 2018. The software patch meant to fix the processor problem addressed only some of the issues the researchers had found. It would be another six months before a second patch, publicly disclosed by the company on Tuesday, would fix all of the vulnerabilities Intel indicated were fixed in May, the researchers said in a recent interview. The public message from Intel was "everything is fixed," said Cristiano Giuffrida, a professor of computer science at Vrije Universiteit Amsterdam and one of the researchers who reported the vulnerabilities. "And we knew that was not accurate." While many researchers give companies time to fix problems before the researchers disclose them publicly, the tech firms can be slow to patch the flaws and attempt to muzzle researchers who want to inform the public about the security issues. Researchers often agree to disclose vulnerabilities privately to tech companies and stay quiet about them until the company can release a patch. Typically, the researchers and companies coordinate on a public announcement of the fix. But the Dutch researchers say Intel has been abusing the process. Now the Dutch researchers claim Intel is doing the same thing again. They said the new patch issued on Tuesday still doesn't fix another flaw they provided to Intel in May.
The Intel flaws, like other high-profile vulnerabilities the computer security community has recently discovered in computer chips, allowed an attacker to extract passwords, encryption keys and other sensitive data from processors in desktop computers, laptops and cloud-computing servers. Intel says the patches "greatly reduce" the risk of attack, but don't completely fix everything the researchers submitted. The company's spokeswoman Leigh Rosenwald said Intel was publishing a timeline with Tuesday's patch for the sake of transparency. "This is not something that is normal practice of ours, but we realized this is a complicated issue. We definitely want to be transparent about that," she said. "While we may not agree with some of the assertions made by the researchers, those disagreements aside, we value our relationship with them."

Read more of this story at Slashdot.


Intel's Linux Graphics Driver Updated For Denial Of Service + Privilege Escalation Bugs [Phoronix]

Of the 77 security advisories Intel is making public and the three big ones of the performance-sensitive JCC Erratum, the new ZombieLoad TAA (TSX Asynchronous Abort), and iTLB Multihit No eXcuses, there are also two fixes to their kernel graphics driver around security issues separate from the CPU woes...


This November, give thanks for only having one exploited Microsoft flaw for Patch Tues. And four Hyper-V escapes [The Register]

Intel joins the fun with monthly releases from Adobe, SAP

Patch Tuesday  The November edition of Patch Tuesday has landed with scheduled updates from Microsoft, Adobe, and SAP, along with the debut of a new update calendar from Intel.…


Microsoft Starts Rolling Out Windows 10 November 2019 Update [Slashdot]

Microsoft today started rolling out the free Windows 10 November 2019 Update. For those keeping track, this update is Windows 10 build 18363 and will bring Windows 10 to version 1909. From a report: The Windows 10 November 2019 Update (version 1909) is odd because it shares the same Cumulative Update packages as the Windows 10 May 2019 Update (version 1903). That means version 1909 will be delivered more quickly to version 1903 users -- it will install like a monthly security update. The build number will barely change: from build 18362 to build 18363. If two computers have the same servicing content, the build revision number should match. For developers, this means a new Windows SDK will not be issued in conjunction with this version of Windows (there aren't any new APIs). Again, the Windows 10 November 2019 Update is not a typical release. It's a much smaller update, though it is still worth getting. Windows 10 version 1909 brings improvements to Windows containers, inking latency, and password recovery. User-facing features include letting third-party digital assistants voice-activate above the Lock screen, being able to create events straight from the Calendar flyout on the Taskbar, and displaying OneDrive content in the File Explorer search box. You may also notice some changes to notification management, better performance and reliability on certain CPUs, and battery life and power efficiency improvements.
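The shared-servicing detail above can be made concrete: in a full Windows build string such as 10.0.18363.476, the third number identifies the feature release while the fourth (the revision) tracks the cumulative update. Here is a small sketch of that rule; the helper names and the build-to-version table are our own, taken from the numbers in the article, not any Microsoft API:

```python
# build number -> marketing version, per the article's numbers
RELEASES = {18362: "1903", 18363: "1909"}

def parse_build(version: str):
    """Split a 'major.minor.build.revision' string into (build, revision)."""
    _major, _minor, build, revision = (int(p) for p in version.split("."))
    return build, revision

def same_servicing_content(a: str, b: str) -> bool:
    """Sketch of the rule quoted above: versions 1903 and 1909 share
    Cumulative Update packages, so an 18362 machine and an 18363 machine
    carry the same servicing content whenever their revisions match."""
    (build_a, rev_a), (build_b, rev_b) = parse_build(a), parse_build(b)
    return build_a in RELEASES and build_b in RELEASES and rev_a == rev_b
```

Under this model, 10.0.18362.476 and 10.0.18363.476 match, while 10.0.18363.476 and 10.0.18363.535 do not.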

Read more of this story at Slashdot.


A US Federal Court Finds Suspicionless Searches of Phones at the Border Are Illegal [Slashdot]

A federal court in Boston has ruled that the government is not allowed to search travelers' phones or other electronic devices at the U.S. border without first having reasonable suspicion of a crime. From a report: That's a significant victory for civil liberties advocates, who say the government's own rules allowing its border agents to search electronic devices at the border without a warrant are unconstitutional. The court said that the government's policies on warrantless searches of devices without reasonable suspicion "violate the Fourth Amendment," which provides constitutional protections against warrantless searches and seizures. The case was brought by 11 travelers -- ten of whom are U.S. citizens -- with support from the American Civil Liberties Union and the Electronic Frontier Foundation, who said border agents searched their smartphones and laptops without a warrant or any suspicion of wrongdoing or criminal activity. The border remains a bizarre legal grey area, where the government asserts powers that it cannot claim against citizens or residents within the United States, but citizens and travelers are not afforded all of their rights as if they were on U.S. soil. The government has long said it doesn't need a warrant to search devices at the border.

Read more of this story at Slashdot.


Facebook iOS app silently turns on your phone camera. Ah, relax – it's just a bug, lol!? [The Register]

Plus Facebook Pay has launched: Why not give them access to your financial data?

Facebook’s iPhone app has a new feature – and one that netizens aren't too happy about: it opens the phone’s camera app in the background without your knowledge.…


The New Sonic the Hedgehog Movie Trailer is a Giant Relief [Slashdot]

You can almost hear the sigh of relief from the global Sega fan community. The new Sonic the Hedgehog movie trailer, which Paramount released this morning, is a giant improvement. From a report: Our spiky hero no longer looks like a nightmarish experiment in avant garde taxidermy. The human teeth have been extracted. He has big doe eyes, not the sinister mini-peepers of the original trailer. The new design genuinely captures a lot of what original character designer Naoto Ohshima set out to achieve -- a cool but cuddly mascot, infusing Japanese kawaii sensibilities with American attitude. His fur is bright, mimicking the famed Sega blue of the company's classic arcade games. He is no longer absolutely terrifying, an important achievement for a family film.

Read more of this story at Slashdot.


Facebook Unites Payment Service Across Apps With Facebook Pay [Slashdot]

Facebook said on Tuesday it was launching Facebook Pay, a unified payment service through which users across its platforms including WhatsApp and Instagram can make payments without exiting the app. From a report: The social network said the service would allow users to send money or make a payment with security options such as PIN or biometrics on their smartphones. Chief Executive Officer Mark Zuckerberg said earlier this year the company is planning to unify the messaging infrastructure across its platforms. He said the company would encrypt conversations on more of its messaging services and make them compatible as direct messaging was likely to dwarf discussion on the traditional, open platform of Facebook's news feed in a few years. Facebook said the new service will collect user information such as payment method, date, billing and contact details when a transaction is made and that it would use the data to show targeted advertisements to users.

Read more of this story at Slashdot.


Don't trust the Trusted Platform Module – it may leak your VPN server's private key (depending on your configuration) [The Register]

You know what they say: Timing is... everything

Trusted Platform Modules, specialized processors or firmware that protect the cryptographic keys used to secure operating systems, are not entirely trustworthy.…


Linux Kernel Gets Mitigations For TSX Async Abort Plus Another New Issue: iTLB Multihit [Phoronix]

The Linux kernel has just received its mitigation work for the newly-announced TSX Asynchronous Abort (TAA) variant of ZombieLoad, plus mitigations for another Intel CPU issue... So today, in addition to the JCC Erratum and ZombieLoad TAA, the latest is iTLB Multihit (NX) - No eXcuses...
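On kernels carrying these mitigations, the status for each issue is exposed under /sys/devices/system/cpu/vulnerabilities/ in files such as tsx_async_abort and itlb_multihit. A minimal sketch for bucketing those status strings follows; the verdict labels and helper names are our own, and the exact status text varies by kernel and hardware:

```python
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def classify(status: str) -> str:
    """Bucket a kernel vulnerability status line into a coarse verdict."""
    s = status.strip()
    if s.startswith("KVM:"):  # itlb_multihit reports its state via a KVM: prefix
        s = s[len("KVM:"):].strip()
    if s == "Not affected":
        return "not affected"
    if s.startswith("Mitigation:"):
        return "mitigated"
    if s.startswith("Vulnerable"):
        return "vulnerable"
    return "unknown"

def report(names=("tsx_async_abort", "itlb_multihit")):
    """Classify each vulnerability file present on the running kernel."""
    return {name: classify((VULN_DIR / name).read_text())
            for name in names if (VULN_DIR / name).exists()}
```

On an older kernel without the new mitigations, the files simply won't exist and report() returns an empty mapping for those names.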


New ZombieLoad Side-Channel Attack Variant: TSX Asynchronous Abort [Phoronix]

In addition to the JCC erratum being made public today and that performance-shifting Intel microcode update affecting Skylake through Cascade Lake, researchers also announced a new ZombieLoad side-channel attack variant dubbed "TSX Asynchronous Abort" or TAA for short...


Hey, you've earned it: Huawei chucks workers a £219m bonus for tackling US blacklist [The Register]

Take the kids somewhere nice

Huawei, America's favourite bogeyman, is to dish out ¥2bn (£219m) as a reward to employees working their arses off on contingency plans to mitigate the anti-China rhetoric coming from the US government.…


True to its name, Intel CPU flaw ZombieLoad comes shuffling back with new variant [The Register]

Boffins say even latest chips can be twisted into leaking data between processor cores

Intel is once again moving to patch its CPU microcode following the revelation of yet another data-leaking side-channel vulnerability.…


Don't miss this patch: Bad Intel drivers give hackers a backdoor to the Windows kernel [The Register]

Alarm raised over more holes in third-party low-level code

Nearly three months after infosec biz Eclypsium highlighted widespread security weaknesses in third-party Windows hardware drivers, you can now add Intel to the list of vendors leaving holes in their all-powerful low-level code.…


Benchmarks Of JCC Erratum: A New Intel CPU Bug With Performance Implications On Skylake Through Cascade Lake [Phoronix]

Intel is today making public the Jump Conditional Code (JCC) erratum. This is a bug involving the CPU's Decoded ICache on Skylake and derived CPUs, where unpredictable behavior can occur when jump instructions cross cache lines. Unfortunately, addressing this error in software comes with a performance penalty, but Intel engineers are ultimately working to offset that through a toolchain update. Here are the exclusive benchmarks out today of the JCC erratum performance impact, as well as when trying to recover that performance through the updated GNU Assembler.
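The toolchain side of the fix works by padding code so that jump instructions no longer cross, or end on, a 32-byte aligned boundary. As a rough illustration of that layout condition (the helper below is our own simplified sketch, not Intel's definition of the erratum):

```python
BOUNDARY = 32  # the erratum concerns 32-byte aligned boundaries

def hits_jcc_condition(addr: int, length: int) -> bool:
    """Sketch: True if an instruction occupying [addr, addr + length)
    crosses a 32-byte boundary or ends exactly on one -- the layout the
    patched GNU Assembler avoids by inserting padding."""
    end = addr + length - 1  # address of the instruction's last byte
    crosses = (addr // BOUNDARY) != (end // BOUNDARY)
    ends_on = (end % BOUNDARY) == BOUNDARY - 1
    return crosses or ends_on
```

For example, a 6-byte conditional jump starting at offset 0x1e straddles the 0x20 boundary and would be flagged, while the same instruction at offset 0x10 would not.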


Dell's new converged play PowerOne: It's a bit like VxBlock, but without all the Cisco gubbins [The Register]

It's Dell all the way down with subscription payment models

Dell's new PowerOne converged infrastructure platform will be sold under a subscription and via a metered pricing arrangement that it has called Technology on Demand.…


Mozilla + Intel + Red Hat Form The Bytecode Alliance To Run WebAssembly Everywhere [Phoronix]

Mozilla, Fastly, Intel, and Red Hat have announced the Bytecode Alliance as a new initiative built around WebAssembly and focused on providing a secure-by-default bytecode that can run from web browsers to desktops to IoT/embedded platforms...


From AV to oy-vey: McAfee antivirus has security hole of its own [The Register]

Security suite falls victim to malicious DLLs

Three of McAfee's anti-malware tools have been found to contain a vulnerability that could potentially allow an attacker to bypass its security protections and take control of a PC.…


Saturday Morning Breakfast Cereal - Basilisk [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I'm informed in the US it's tiny horsies, not tiny ponies. This needs to change.

Today's News:


Gavin Patterson's gravy train keeps on rolling as former BT boss tossed two more sinecures [The Register]

Next stop, Solon and Elixirr – on top of role as Salesforce chair

Former BT chief exec Gavin Patterson is a busy boy. On top of his position as part-time chair of Salesforce, the mullet-sporting, '80s businessman throwback will also be gracing two more companies.…


Vodafone takes €1.9bn punch to wallet thanks to India's decision on airwave licence fees [The Register]

UK profits also down due to opex shift to IBM's cloud

Vodafone reported a loss of €1.9bn (£1.6bn) in its latest half-year results ended 30 September, chalked up mainly to India's decision to change the way it charges telcos for using airwaves.…


LinuxBoot Continues Maturing - Now Able To Boot Windows [Phoronix]

LinuxBoot is approaching two years of age as the effort led by Facebook and others to replace some elements of the system firmware with the Linux kernel...


DXC's new boss has quite the cleanup ahead after frankenfirm exits Q2 nursing $2bn loss [The Register]

This poisoned chalice had better be delicious

A 10-digit dollar loss for Q2, hundreds of millions in forecast revenue clipped for the second half of the fiscal year, the ownership of business units being reviewed, and an admission that years of redundancies came home to roost.…


Boeing comes clean on parachute borkage as the ISS crew is set to shrink [The Register]

Also: Four RS-25 engines prepare for, at best, a watery grave

Roundup  While astronomers winced and Musk's rocketeers cheered the deployment of another 60 Starlink satellites into Earth orbit, there was plenty of other action in the rocket-bothering world.…


Librsvg Continues Rust Conquest, Pulls In CSS Parsing Code From Mozilla Servo [Phoronix]

For about three years now GNOME's SVG rendering library has been transitioning to Rust. This library, librsvg, now makes further use of Rust around its CSS parsing code and Mozilla's Servo is doing some of that heavy lifting...


Gas-guzzling Americans continue to shun electric vehicles as sales fail to bother US car market [The Register]

While hipster urbanites favour ride hailing and shared scooters

Sales of greener cars remain proportionately minuscule in the US – even Elon Musk's shiny Tesla brand is failing to get more gas-loving Americans to ditch their petrol monsters in favour of something electric-based.…


Coreboot Support Is Being Worked On For Fwupd/LVFS [Phoronix]

To make it easier to update Coreboot system firmware, the ability to update Coreboot via the Linux Vendor Firmware Service (LVFS) with Fwupd is finally being worked on...


Qualcomm's Adreno 640 GPU Is Working Easily With The Freedreno OpenGL/Vulkan Drivers [Phoronix]

The Adreno 640 GPU that is used by Qualcomm's Snapdragon 855/855+ SoCs is now working with the open-source Freedreno Gallium3D OpenGL and "TURNIP" Vulkan drivers with the newest Mesa 20.0 development code...


Next year's Windows 10 comes bounding into the Slow Ring, which means 19H2 waits in the wings [The Register]

Insiders can no longer jump off the testing train

On the eve of Patch Tuesday, Microsoft began shifting Slow Ring testers onto 2020's Windows 10.…


'Sophisticated' cyber attack on UK Labour Party platforms was probably just a DDoS, says official [The Register]

'Really very everyday' – report

The UK's Labour Party says its campaign site has been the target of a "sophisticated and large-scale cyber-attack" and has informed GCHQ's National Cyber Security Centre.…


150 infosec bods now know who they're up against thanks to BT Security cc/bcc snafu [The Register]

Mass-mail fail followed outfit's appearance at jobs fair

BT Security managed to commit the most basic blunder of all after emailing around 150 infosec professionals who attended a jobs fair – using the "cc" field instead of "bcc".…


Canada's OpenText buys SMB backerupper Carbonite for $1.42bn [The Register]

Backup a minute, folks

After weeks of acquisition rumours, Canadian enterprise software pusher OpenText bit the bullet yesterday and swallowed cloud backup and storage service vendor Carbonite for a cool $1.42bn.…


I'm still not that Gary, says US email mixup bloke who hasn't even seen Dartford Crossing [The Register]

Nor is he called Andrew, but he's still getting messages about Dart Charge

Despite El Reg writing about the case of the Ryanair passenger earlier this year who was registered for a flight in error after somebody mistyped his email address, poor old "Not That Gary" has been struck by the same problem again – thanks to someone using a toll bridge in southeast England.…


Without any apparent irony, Google marks Chrome's 'small' role in web ecosystem [The Register]

Chrome Dev Summit also brings resolution of tabs vs. spaces fight, for now

At the Chrome Developer Summit on Monday, Google finally settled the tabs vs. spaces debate and celebrated web community diversity, now at risk of becoming a monoculture thanks to Chrome's market dominance.…


Astroboffins capture video of Mercury passing across the Sun's surface [The Register]

Not gonna happen again before 2032

Mercury, the smallest planet in our Solar System, appeared as a tiny black dot on Monday as it crossed the Sun’s surface in between the Earth and its star.…

Monday, 11 November


Intel's Vulkan Linux Driver Lands Timeline Semaphore Support [Phoronix]

A change to look forward to with Mesa 20.0 due out next quarter is Vulkan timeline semaphore support (VK_KHR_timeline_semaphore) for Intel's "ANV" open-source driver...


GStreamer Conference 2019 Videos Now Available Online [Phoronix]

Taking place at the end of October during the Linux Foundation events in Lyon, France was the GStreamer Conference to align with the annual developer festivities...


KDE Frameworks 5.64 Released [Phoronix]

Sunday marked the release of KDE Frameworks 5.64 as the latest monthly update to this collection of libraries complementing Qt5...


Microsoft embraces California data privacy law – don't expect Google to follow suit [The Register]

Software giant promises to extend protections across US

Microsoft has said that not only will it embrace a new data privacy law in California, due to come into force in the New Year, but will extend the same protections to everyone in the US.…


Google brings its secret health data stockpiling systems to the US [The Register]

Remember the UK DeepMind scandal? No?

Updated  Google is at it again: storing and analyzing the health data of millions of patients without seeking their consent - and claiming it doesn’t need their consent either.…


Apple's credit card caper probed over sexism claims – after women screwed over on limits [The Register]

Blame the algorithms: It's the new 'dog ate my homework'

Apple is being probed by New York’s State Department of Financial Services after angry customers accused the algorithms behind its new credit card, Apple Card, of being sexist against women.…


Despite Windows BlueKeep exploitation freak-out, no one stepped on the gas with patching, say experts [The Register]

Admins snoozing on fixes despite reports of active attacks

The flurry of alerts in recent weeks of in-the-wild exploitation of the Windows RDP BlueKeep security flaw did little to change the rate at which people patched their machines, it seems.…


Google Chrome To Begin Marking Sites That Are Slow / Fast [Phoronix]

Chrome has successfully shamed websites not supporting HTTPS, and now Google is looking to call out websites that do not typically load fast...


Uber CEO compares pedestrian death to murder of Saudi journalist, saying all should be forgiven [The Register]

Uber PRs missing the days of Travis Kalanick

Opinion  Two years ago, Uber CEO Dara Khosrowshahi was brought in to help the company recover from a long series of ethical and moral lapses. But based on an interview this week, it seems the company’s culture may be rubbing off on him more than he is impacting it.…


Back-2-school hacking: Kaspersky blames pesky script kiddies for rash of DDoS cyber hooliganism [The Register]

Educational institutions main target during September spike

Kaspersky researchers have blamed pesky schoolkids for the big September spike in denial-of-service attacks.…


Tune in this month: El Reg's Chris Mellor talks storage, cloud and much more with Qumulo – and you're all invited [The Register]

Gather round for this must-watch vid podcast

Webcast  The Register's storage editor Chris Mellor will interview Qumulo veep Molly Presley in a webcast set to be streamed on 19 November.…


SpaceX flings another 60 Starlink satellites into orbit in firm's heaviest payload to date [The Register]

Live from Cape Canaveral: El Reg watches Falcon do its stuff while astronomers worry about the skies

The first upgraded batch of Starlink satellites were launched by SpaceX today, marking the fourth reuse of a Falcon 9 booster and the first of a payload fairing.…


If it sounds too good to be true, it most likely is: Nobody can decrypt the Dharma ransomware [The Register]

Not even data recovery companies

A data recovery company is dubiously claiming it has cracked decryption of Dharma ransomware – despite there being no known method of unscrambling its files.…


The Disappointing Direction Of Linux Performance From 4.16 To 5.4 Kernels [Phoronix]

With the Linux 5.4 kernel set to be released in the next week or two, here is a look at the performance going back to the days of Linux 4.16 from early 2018. At least the Linux kernel continues picking up many new features, even as security mitigations and other factors keep kernel performance trending lower.


Saturday Morning Breakfast Cereal - Coffee [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

This is how he has learned to love his commute.

Today's News:


Any promises to extend rights of self-employed might win an election, hint Brit freelancer orgs [The Register]

Just saying

Political parties should extend the rights of the self-employed ahead of the country's general election on 12 December, including scrapping IR35 off-payroll working rules and addressing late payments.…


Vodafone UK links arms with Openreach to build out its full-fibre network [The Register]

Budge up, CityFibre

Vodafone has inked a deal with BT's Openreach to expand its gigabit broadband network by 500,000 premises – on top of its existing deal with alternative network provider CityFibre for 5 million premises.…


Double downtime: Azure DevOps, Google cloud users put the kettle on [The Register]

Put it all on the cloud, they said…

Microsoft's Azure DevOps is suffering what it describes as "availability degradation" in the UK and Europe and parts of Google's cloud platform are also broken.…


SUSE Continues Working On Linux Core Scheduling For Better Security [Phoronix]

SUSE and other companies, such as DigitalOcean, have been working on Linux core scheduling to make virtualization safer, particularly in light of security vulnerabilities like L1TF and MDS. The core scheduling work is about ensuring different VMs don't share an HT sibling; instead, only threads from the same VM, or mutually trusted applications, run on the siblings of a core...


237 UK police force staff punished for misusing IT systems in last 2 years [The Register]

Snooping workers blamed for bunch of data breaches

Updated  One UK police staffer is disciplined every three days for breaking data protection rules or otherwise misusing IT systems, according to a Freedom of Information request by think tank Parliament Street.…


Pre-Loaded Linux PCs Continue Increasing - TUXEDO Computers Sets Up New Offices [Phoronix]

From System76 setting up their own manufacturing facility for Linux desktops to Dell offering more Linux laptop options, the demand for pre-loaded Linux PCs continues to increase. One of the smaller Linux PC vendors also now expanding is German-based TUXEDO Computers...


Shortwave Enters Beta As New GNOME Internet Radio Player [Phoronix]

Shortwave is a new Internet radio player built for GNOME with GTK3; it has been in development for the past year...


Teachers: Make your pupils' parents buy them an iPad to use at school. Oh and did you pack sunglasses for the Apple-funded jolly? [The Register]

iGiant paid for Irish educators to attend events abroad – report

Apple has reportedly been paying for Irish teachers to attend functions in the US, according to leaked docs.…


'That roar is terrific... look at that rocket go!' It's been 52 years since first Saturn V left the pad [The Register]

Apollo 12 @ 50 is just around the corner, but it wouldn't have happened without Apollo 4

"Our building's shaking here, our building's shaking! Oh it's terrific... the building's shaking! This big blast window is shaking! We're holding it with our hands! Look at that rocket go... enter the clouds at 3,000ft! Look at it going... you can see it, you can see it..."…


What's that, Skippy? A sad-faced Microsoft engineer has arrived with an axe? Skippy? [The Register]

Plus: New toys for Teams, a fresh Visual Studio Code, and more

Roundup  Despite it being Ignite week for much of Microsoft, there was still plenty going on in the house that Bill built.…


Hyphens of mass destruction: When a clumsy finger meant the end for hundreds of jobs [The Register]

From a time before: 'This will do something awful. Are you sure? (Y/N)'

Who, Me?  Welcome back to Who, Me?, The Register's weekly dip into the suspiciously bulging mailbag of reader confessions.…


Understanding “disk space math” [Fedora Magazine]

Everything in a PC, laptop, or server is represented as binary digits (a.k.a. bits, where each bit can only be 1 or 0). There are no characters like we use for writing or numbers as we write them anywhere in a computer’s memory or secondary storage such as disk drives. For general purposes, the unit of measure for groups of binary bits is the byte — eight bits. Bytes are an agreed-upon measure that helped standardize computer memory, storage, and how computers handled data.

There are various terms in use to specify the capacity of a disk drive (either magnetic or electronic). The same measures apply to a computer's random access memory (RAM) and the other memory devices in your computer. So now let's see how the numbers are made up.

Suffixes are used with the number that specifies the capacity of the device. The suffixes designate a multiplier that is to be applied to the number that preceded the suffix. Commonly used suffixes are:

  • Kilo = 10^3 = 1,000 (one thousand)
  • Mega = 10^6 = 1,000,000 (one million)
  • Giga = 10^9 = 1,000,000,000 (one billion)
  • Tera = 10^12 = 1,000,000,000,000 (one trillion)

As an example, 500 GB (gigabytes) is 500,000,000,000 bytes.

The units used to specify memory and storage capacity in advertisements, on boxes in the store, and so on are decimal units, as shown above. However, since computers only use binary bits, the actual capacity of these devices differs from the advertised capacity.

You saw that the decimal numbers above were shown with their equivalent powers of ten. In the binary system, numbers are represented as powers of two. The table below shows how bits are used to represent powers of two in an 8-bit byte. At the bottom of the table there is an example of how the decimal number 109 can be represented as a binary number that fits in a single byte of 8 bits (01101101).

Eight-bit binary number

                 Bit 7   Bit 6   Bit 5   Bit 4   Bit 3   Bit 2   Bit 1   Bit 0
Power of 2       2^7     2^6     2^5     2^4     2^3     2^2     2^1     2^0
Decimal Value    128     64      32      16      8       4       2       1
Example Number   0       1       1       0       1       1       0       1
The example bit values comprise the binary number 01101101. To get the equivalent decimal value just add the decimal values from the table where the bit is set to 1. That is 64 + 32 + 8 + 4 + 1 = 109.
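The same conversion can be checked in Python, whose built-in int() accepts a base argument:

```python
# Verify the worked example: binary 01101101 is decimal 109.
bits = "01101101"
value = int(bits, 2)   # parse the string as a base-2 number
print(value)           # 109

# The same total as adding up the set bit values by hand:
assert value == 64 + 32 + 8 + 4 + 1
```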

By the time you get out to 2^30 you have decimal 1,073,741,824, using just 31 bits (don't forget 2^0). That's a large enough number to start specifying memory and storage sizes.

Now comes what you have been waiting for. The table below lists common designations as they are used for labeling decimal and binary values.



KB (Kilobyte)      1 KB  = 1,000 bytes
KiB (Kibibyte)     1 KiB = 1,024 bytes
MB (Megabyte)      1 MB  = 1,000,000 bytes
MiB (Mebibyte)     1 MiB = 1,048,576 bytes
GB (Gigabyte)      1 GB  = 1,000,000,000 bytes
GiB (Gibibyte)     1 GiB = 1,073,741,824 bytes
TB (Terabyte)      1 TB  = 1,000,000,000,000 bytes
TiB (Tebibyte)     1 TiB = 1,099,511,627,776 bytes

Note that all of the quantities of bytes in the table above are expressed as decimal numbers. They are not shown as binary numbers because those numbers would be more than 30 characters long.

Most users and programmers need not be concerned with the small differences between the binary and decimal storage size numbers. If you’re developing software or hardware that deals with data at the binary level you may need the binary numbers.
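If you do care about the difference, it is easy to compute. This short Python sketch shows why a drive sold as "500 GB" is reported as roughly 465 GiB by tools that use binary units:

```python
# A "500 GB" drive holds 500 * 10^9 bytes; tools that report in GiB
# divide by 2^30 (1,073,741,824), so the same capacity looks smaller.
advertised = 500 * 10**9       # 500,000,000,000 bytes, as advertised
in_gib = advertised / 2**30    # the same bytes expressed in GiB
print(f"{in_gib:.2f} GiB")     # about 465.66 GiB
```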

As for what this means to your PC: Your PC will make use of the full capacity of your storage and memory devices. If you want to see the capacity of your disk drives, thumb drives, etc, the Disks utility in Fedora will show you the actual capacity of the storage device in number of bytes as a decimal number.

There are also command line tools that can provide you with more flexibility in seeing how your storage bytes are being used. Two such command line tools are du (for files and directories) and df (for file systems). You can read about these by typing man du or man df at the command line in a terminal window.
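For example, GNU df and du can report the same sizes in either convention, which makes the two unit systems easy to compare side by side (paths here are just illustrative):

```shell
df -h /        # human-readable sizes in powers of 1024 (K, M, G = KiB, MiB, GiB)
df -H /        # the same sizes in powers of 1000 (k, M, G = kB, MB, GB)
du -sh ~       # total size of your home directory, powers of 1024
du -sh --si ~  # the same total in powers of 1000
```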

Photo by Franck V. on Unsplash.


Hate hub hacked, Cisco bugs squished, Bluekeep attacks begin, and much, much more [The Register]

Plus, rConfig flaw raises alarms

Roundup  Time for a look at some of the other security stories making the rounds in the past week.…

Sunday, 10 November


Ubuntu 20.04 LTS Continuing To Work On Python 2 Removal [Phoronix]

The goal for Ubuntu 20.04 is to ship with Python 2 removed, since Py2 reaches end-of-life at the start of the year and this next Ubuntu Linux release is a Long-Term Support release. However, many packages that depend on Python 2 currently remain in Debian unstable and Ubuntu's "Focal Fossa" archive...


Is this paragraph from Trump or an AI bot? You decide, plus buy your own AI for $399 [The Register]

Also Uber to Waymo - I wish I could quit you!

Roundup  Hello, welcome to this week's roundup of AI news. Read on for a fun and, frankly, worrying quiz that tests whether you can tell if something was made up by an AI text-generation model or said by Trump, and more.…


Arch Linux Updates Its Kernel Installation Handling [Phoronix]

Arch Linux has updated the behavior when installing the linux, linux-lts, linux-zen, and linux-hardened kernel options on this popular distribution...


Linux 5.4-rc7 Kernel Released With VirtualBox Shared Folder Driver In Place [Phoronix]

Linux 5.4-rc7 was just released as the newest test candidate of the maturing Linux 5.4 kernel. At this stage it's looking like an eighth weekly RC will be warranted next weekend before officially releasing Linux 5.4.0 on 24 November...


Windows 10 vs. Ubuntu 19.10 vs. Clear Linux Performance On The Dell Ice Lake Laptop [Phoronix]

Last month I posted benchmarks looking at the Windows 10 vs. Linux OpenGL and Vulkan graphics performance for the Ice Lake "Gen11" graphics. But for those wondering about the CPU/system performance between Windows and Linux for the Core i7-1065G7 with the Dell XPS 7390, here are those benchmarks as we compare the latest Windows 10 to Ubuntu 19.10 and Intel's own Clear Linux platform.


Steam For Linux Beta Adds Experimental Namespaces/Containers Support [Phoronix]

Longtime Linux game developer Timothee Besset has outlined the support introduced by Valve this week in their latest Steam Linux client beta for supporting Linux namespaces / containers. This experimental functionality may in the end provide better support for 32-bit compatibility as more Linux distributions focus solely on x86_64 packages, reducing some of the fragmentation/library conflicts between some Linux distributions and Steam, and other headaches currently plaguing the Steam Linux space...


Reiser4 File-System Is Still Ticking In 2019 - Now Updated For Linux 5.3 Compatibility [Phoronix]

While Linux 5.4 is rolling out in about two weeks, the out-of-tree Reiser4 file-system has just been updated for Linux 5.3 support...


Virtual KMS Driver To Work On Virtual Refresh Rate Support (FreeSync) [Phoronix]

Over the past year and a half the VKMS Linux DRM driver has come together as the "virtual kernel mode-setting" implementation for headless systems and other environments not backed by a physical display. Interestingly being tacked on their TODO list now is VRR (Variable Refresh Rate) support. Separately, the prominent VKMS developer is now employed by AMD...


Saturday Morning Breakfast Cereal - Abduction [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

There's a good ten trillion word thinkpiece to be written about how all conspiracy theories are at their core delusions of grandeur.

Today's News:


November Is Still Bringing Many Interesting Linux Benchmarks / Milestones [Phoronix]

Pardon the rather slow pace of new Phoronix content over the past week (in particular, the lack of big benchmark articles); my wife gave birth early and was in the hospital for a few days. But the remainder of November is set to be quite exciting on the Linux/open-source performance front. Here is some of what else is on tap for November...


OpenZFS Developer Summit 2019 Videos + Slides For The Latest On Open-Source ZFS [Phoronix]

Taking place 4 and 5 November in San Francisco was the OpenZFS Developer Summit. This two-day open-source ZFS developer summit made possible by Intel, Delphix, Datto, and OSNexus had a lot of interesting presentations from the state of ZFS TRIM/Discard to debugging topics...


Thunderbolt 3 Software Connection Manager Support Coming In Linux 5.5 For Apple Hardware [Phoronix]

The Thunderbolt changes have been merged to char-misc ahead of the upcoming Linux 5.5 merge window...

Saturday, 09 November


Saturday Morning Breakfast Cereal - Hey [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I wonder how long until you just shout Fried Cheese and a drone gently places it into your face.

Today's News:

Friday, 08 November


Saturday Morning Breakfast Cereal - Gatekeeping [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I was behaving like a man-child even before I was a man!

Today's News:


Managing software and services with Cockpit [Fedora Magazine]

The Cockpit series continues to focus on some of the tools users and administrators can use to perform everyday tasks within the web user interface. So far we've covered introducing the user interface, storage and network management, and user accounts. This article highlights how Cockpit handles software and services.

The menu options for Applications and Software Updates are available through Cockpit’s PackageKit feature. To install it from the command-line, run:

 sudo dnf install cockpit-packagekit

For Fedora Silverblue, Fedora CoreOS, and other ostree-based operating systems, install the cockpit-ostree package and reboot the system:

sudo rpm-ostree install cockpit-ostree; sudo systemctl reboot

Software updates

On the main screen, Cockpit notifies the user whether the system is updated, or if any updates are available. Click the Updates Available link on the main screen, or Software Updates in the menu options, to open the updates page.

RPM-based updates

The top of the screen displays general information such as the number of updates and the number of security-only updates. It also shows when the system was last checked for updates, along with a button to check again. This button is equivalent to running sudo dnf check-update.

Below is the Available Updates section, which lists the packages requiring updates. Furthermore, each package displays the name, version, and best of all, the severity of the update. Clicking a package in the list provides additional information such as the CVE, the Bugzilla ID, and a brief description of the update. For details about the CVE and related bugs, click their respective links.

Also, one of the best features about Software Updates is the option to only install security updates. Distinguishing which updates to perform makes it simple for those who may not need, or want, the latest and greatest software installed. Of course, one can always use Red Hat Enterprise Linux or CentOS for machines requiring long-term support.
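The same security-only distinction is available on the command line through dnf (shown here for comparison; the exact advisory listing depends on your enabled repositories):

```shell
sudo dnf check-update                # list all available updates
sudo dnf updateinfo list --security  # list security advisories only
sudo dnf upgrade --security          # apply only the security updates
```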

The example below demonstrates how Cockpit applies RPM-based updates.

Running system updates with RPM-based operating systems in Cockpit.

OSTree-based updates

The popular article What is Silverblue states:

OSTree is used by rpm-ostree, a hybrid package/image based system… It atomically replicates a base OS and allows the user to “layer” the traditional RPM on top of the base OS if needed.

Because of this setup, Cockpit uses a snapshot-like layout for these operating systems. As seen in the demo below, the top of the screen displays the repository (fedora), the base OS image, and a button to Check for Updates.

Clicking the repository name (fedora in the demo below) opens the Change Repository screen. From here one can Add New Repository, or click the pencil icon to edit an existing repository. Editing provides the option to delete the repository, or Add Another Key. To add a new repository, enter the name and URL. Also, select whether or not to Use trusted GPG key.

There are three categories that provide details of its respective image: Tree, Packages, and Signature. Tree displays basic information such as the operating system, version of the image, how long ago it was released, and the origin of the image. Packages displays a list of installed packages within that image. Signature verifies the integrity of the image such as the author, date, RSA key ID, and status.

The current, or running, image displays a green check-mark beside it. If something happens, or an update causes an issue, click the Roll Back and Reboot button. This restores the system to a previous image.
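These screens correspond to standard rpm-ostree operations; on a Silverblue or CoreOS host you could run the equivalents from a terminal:

```shell
rpm-ostree status        # show the current and previous deployments
sudo rpm-ostree upgrade  # fetch and deploy the newest image
sudo rpm-ostree rollback # make the previous deployment the default for the next boot
```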

Running system updates with OSTree-based operating systems in Cockpit.


Applications

The Applications screen displays a list of add-ons available for Cockpit. This makes it easy to find and install the plugins a user needs. At the time of this article, some of the options include the 389 Directory Service, Fleet Commander, and Subscription Manager. The demo below shows a complete list of available Cockpit add-ons.

Also, each item displays the name, a brief description, and a button to install, or remove, the add-on. Furthermore, clicking the item displays more information (if available). To refresh the list, click the icon at the top-right corner.

Managing Cockpit application add-ons and features

Subscription Management

Subscription managers allow admins to attach subscriptions to the machine. Moreover, subscriptions give admins control over user access to content and packages. One example of this is the well-known Red Hat subscription model. This feature works in conjunction with the subscription-manager command.

The Subscriptions add-on can be installed via Cockpit’s Applications menu option. It can also be installed from the command-line with:

sudo dnf install cockpit-subscriptions

To begin, click Subscriptions in the main menu. If the machine is currently unregistered, it opens the Register System screen. Next, select the URL. You can choose Default, which uses Red Hat’s subscription server, or enter a Custom URL. Enter the Login, Password, Activation Key, and Organization ID. Finally, to complete the process, click the Register button.

The main page for Subscriptions shows whether the machine is registered, the System Purpose, and a list of installed products.

Managing subscriptions in Cockpit


Services

To start, click the Services menu option. Because Cockpit uses systemd, we get options to view System Services, Targets, Sockets, Timers, and Paths. Cockpit also provides an intuitive interface to help users search for and find the service they want to configure. Services can also be filtered by their state: All, Enabled, Disabled, or Static. Below this is the list of services. Each row displays the service name, description, state, and automatic startup behavior.

For example, let’s take bluetooth.service. Typing bluetooth in the search bar automatically displays the service. Now, select the service to view the details of that service. The page displays the status and path of the service file. It also displays information in the service file such as the requirements and conflicts. Finally, at the bottom of the page, are the logs pertaining to that service.

Also, users can quickly start and stop a service by toggling the switch beside the service name. The three-dot menu to the right of that switch expands those options to Enable, Disable, or Mask/Unmask the service.
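Under the hood these controls map onto ordinary systemctl commands; for example, for bluetooth.service:

```shell
systemctl status bluetooth.service        # the state shown on the service page
sudo systemctl stop bluetooth.service     # the on/off toggle
sudo systemctl disable bluetooth.service  # the Disable menu entry
sudo systemctl mask bluetooth.service     # Mask: block the service from being started
```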

To learn more about systemd, check out the series in the Fedora Magazine starting with What is an init system?

Managing services in Cockpit

In the next article we’ll explore the security features available in Cockpit.

Thursday, 07 November


Inside TensorFlow [Yelp Engineering and Product Blog]

Inside TensorFlow It’s probably not surprising that Yelp utilizes deep neural networks in its quest to connect people with great local businesses. One example is the selection of photos you see in the Yelp app and website, where neural networks try to identify the best quality photos for the business displayed. A crucial component of our deep learning stack is TensorFlow (TF). In the process of deploying TF to production, we’ve learned a few things that may not be commonly known in the Data Science community. TensorFlow’s success stems not only from its popularity within the machine learning domain, but...


Saturday Morning Breakfast Cereal - Note [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

Really, you should just have a roll of these printed, just in case.

Today's News:


Tuning your bash or zsh shell on Fedora Workstation and Silverblue [Fedora Magazine]

This article shows you how to set up some powerful tools in your command line interpreter (CLI) shell on Fedora. If you use bash (the default) or zsh, Fedora lets you easily set up these tools.


Some packages need to be installed first. On Workstation, run the following command:

sudo dnf install git wget curl ruby ruby-devel zsh util-linux-user redhat-rpm-config gcc gcc-c++ make

On Silverblue run:

sudo rpm-ostree install git wget curl ruby ruby-devel zsh util-linux-user redhat-rpm-config gcc gcc-c++ make

Note: On Silverblue you need to restart before proceeding.


You can give your terminal a new look by installing new fonts. Why not fonts that display characters and icons together?


Open a new terminal and type the following commands:

git clone --depth=1 ~/.nerd-fonts
cd ~/.nerd-fonts
sudo ./


On Workstation, install using the following command:

sudo dnf install fontawesome-fonts

On Silverblue, type:

sudo rpm-ostree install fontawesome-fonts


Powerline is a statusline plugin for vim that also provides statuslines and prompts for several other applications, including bash, zsh, tmux, i3, Awesome, IPython and Qtile. You can find more information about powerline on the official documentation site.


To install the powerline utility on Fedora Workstation, open a new terminal and run:

sudo dnf install powerline vim-powerline tmux-powerline powerline-fonts

On Silverblue, the command changes to:

sudo rpm-ostree install powerline vim-powerline tmux-powerline powerline-fonts

Note: On Silverblue, you need to restart before proceeding.

Activating powerline

To make powerline active by default, place the code below at the end of your ~/.bashrc file:

if [ -f `which powerline-daemon` ]; then
  powerline-daemon -q
  . /usr/share/powerline/bash/
fi

Finally, close the terminal and open a new one. It will look like this:


Oh-My-Zsh is a framework for managing your Zsh configuration. It comes bundled with helpful functions, plugins, and themes. To learn how to set Zsh as your default shell, see this article.


Type this in the terminal:

sh -c "$(curl -fsSL"

Alternatively, you can type this:

sh -c "$(wget -O -)"

At the end, you see the terminal like this:

Congratulations, Oh-my-zsh is installed.


Once installed, you can select your theme. I prefer Powerlevel10k; one advantage is that it is 100 times faster than the powerlevel9k theme. To install it, run:

git clone ~/.oh-my-zsh/themes/powerlevel10k

Then set ZSH_THEME in your ~/.zshrc file.
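With the theme cloned into ~/.oh-my-zsh/themes/powerlevel10k as above, the line in ~/.zshrc looks like this:

```shell
# In ~/.zshrc: select the Powerlevel10k theme for Oh-My-Zsh.
ZSH_THEME="powerlevel10k/powerlevel10k"
```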


Close the terminal. When you open the terminal again, the Powerlevel10k configuration wizard will ask you a few questions to configure your prompt properly.

After finishing the Powerlevel10k configuration wizard, your prompt will look like this:

If you don’t like it, you can run the Powerlevel10k wizard again at any time with the command p10k configure.

Enable plug-ins

Plug-ins are stored in the ~/.oh-my-zsh/plugins folder. You can visit this site for more information. To activate a plug-in, you need to edit your ~/.zshrc file. Installing a plug-in creates a series of aliases, or shortcuts, that execute a specific function.

For example, to enable the firewalld and git plugins, first edit ~/.zshrc:

plugins=(firewalld git)

Note: use a blank space to separate the plug-in names in the list.

Then reload the configuration:

source ~/.zshrc 

To see the created aliases, use the command:

alias | grep firewall

Additional configuration

I suggest installing the zsh-syntax-highlighting and zsh-autosuggestions plug-ins:

git clone ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
git clone ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions

Add them to the plug-ins list in your ~/.zshrc file:

plugins=( [plugins...] zsh-syntax-highlighting zsh-autosuggestions)

Reload the configuration

source ~/.zshrc 

See the results:

Colored folders and icons

Colorls is a Ruby gem that beautifies the terminal’s ls command, with colors and font-awesome icons. You can visit the official site for more information.

Because it’s a ruby gem, just follow this simple step:

sudo gem install colorls

To keep up to date, just do:

sudo gem update colorls

To avoid typing colorls every time, you can create aliases in your ~/.bashrc or ~/.zshrc:

alias ll='colorls -lA --sd --gs --group-directories-first'
alias ls='colorls --group-directories-first'

You can also enable tab completion for colorls flags by adding the following line at the end of your shell configuration:

source $(dirname $(gem which colorls))/

Reload your shell and see what happens:


Quoi de neuf en Francophonie? (What's New in the Francophone World?) [The Cloudflare Blog]

A look back at the first Cloudflare event for our French-speaking customers and prospects

Cloudflare in France, Belgium and Switzerland means more than a hundred Enterprise customers, several thousand organizations on self-service plans, and a team of more than fifteen people supporting our French-speaking customers with the technical and commercial management of their accounts (Business Development Representatives, Customer Success Managers, Account Executives, Solutions Engineers, Support Engineers). Only a year ago, that team numbered just five.

A part of the Cloudflare team
General presentation of Cloudflare by David Lallement

Because our customers are growing too! From start-ups such as Happn and Back Market to large groups such as Solocal-Pages Jaunes, by way of NGOs and public-sector organizations like the European Broadcasting Union, Cloudflare is winning over more and more of the French-speaking world.

This event was an opportunity for this growing community of prospects and customers to meet, listen to one another, and discuss their challenges, whether directly related to Cloudflare or not. All in a friendly, relaxed setting, with a view over the Champs-Élysées from the sunny balcony of the Maison du Danemark. Not forgetting a few petits fours and a little wine (we are in France, after all!).

Welcome breakfast
View over the Champs-Élysées
A friendly break with our customer Deindeal, the company behind Switzerland's two leading flash-sale e-commerce platforms
A break on the balcony of the Maison du Danemark

But enough about logistics: what did we actually talk about during this morning session?

On the agenda:

  • Presentation of Cloudflare and its new products
  • The Internet and the Cloudflare network
  • Presentation by Marlin Cloud on the implementation for their client AB InBev
  • Presentation by Solocal-Pages Jaunes on how they selected and rolled out Cloudflare
  • Customer panel with feedback from our customers Back Market, Oscaro and Ankama

During the first part of the morning, David Lallement (Account Executive, Cloudflare) presented Cloudflare, focusing on the latest features and upcoming developments.

Presentation on new Cloudflare products by David Lallement

Étienne Labaume, representing Cloudflare's network team, then briefly presented the architecture of the Internet and Netperf, the solution designed by Cloudflare that has drastically reduced 522 errors.

Étienne Labaume presenting the Cloudflare network
Percentage of 522 requests to the origin

Finally, Jürgen Coetsiers, co-founder of Marlin Cloud, used the example of the GDPR to explain how his company has helped its customers, including a large multi-brand group, meet their compliance requirements. Drawing on Cloudflare solutions, notably Access, Jürgen presented a set of automation best practices that let multi-cloud applications become, and remain, compliant.

Presentation of the AB InBev customer case by Jürgen Coetsiers

During the second part, Loïc Troquet, head of the Technical Excellence unit at Solocal-Pages Jaunes, presented their customer journey and why they chose Cloudflare as their security and performance provider as part of their move to the cloud. Loïc also walked through statistics available from their dashboard, such as cache performance and firewall analytics, to explain the decisive role this data played in reaching their goals.

Presentation by Loïc Troquet on rolling out Cloudflare at Solocal-Pages Jaunes

Finally, a customer panel let us hear the experiences of Théotime Lévèque, DevOps lead at Back Market, Sébastien Aperghis-Tramoni, systems engineer at Oscaro, and Samuel Delplace, CIO at Ankama.

From left to right: Théotime, Sébastien and Samuel share their feedback

The presentations were punctuated by questions and informal exchanges between the speakers and the audience, without any form of censorship. We insisted that our customers be able to speak freely about Cloudflare and share their feedback, positive or negative.

The result: a community of customers and prospects who feel they can speak up and take ownership of an event organized first and foremost for them. The feedback gathered from speakers and attendees confirmed the importance of this transparency, which sits at the heart of Cloudflare's mission.

A few testimonials:

"For us, the meetup was a chance to lift our heads from the handlebars and discover features that had not caught our attention until now. Meeting the Cloudflare team and existing customers led us to explore certain features for concrete use cases (Cloudflare Access, Image Resizing...)."

Sébastien Aperghis-Tramoni, Oscaro

"I am delighted to have taken part in this event, which is also a way to draw inspiration from how Cloudflare is used in other contexts."

Loïc Troquet, Solocal-Pages Jaunes

"This customer day was very enriching and allowed us to share best practices across different use cases for Cloudflare's solutions. It opened us up to new opportunities and new challenges to take on."

Pascal Binard, Marlin Cloud

"I attended the event to keep up with the latest developments of the platform in a more condensed format than the blog posts, which I don't always have time to follow, and also to meet other customers. It is very important for us to be able to lean on one another and exchange feedback. I also appreciated that my participation in the panel was not censored, and that I could share the good as well as the less good."

Théotime Lévèque, Back Market

Cloudflare's France Customer Success team, from left to right: Valentine, Lorène and David

In our survey, attendees said they particularly appreciated the opportunities to talk with customers, prospects and the Cloudflare team, as well as the updates on new products and future developments. Some, however, would have liked more detail on the roadmap. We will take this feedback into account when shaping the content of our next event. Feel free to contact us if you have other suggestions to help us improve the next meetup. We hope to see many of you there!

Globe-trotter? Come and see us at our upcoming European events in Amsterdam or Manchester.

Want to join the Cloudflare team? Apply here!

Quelques mots sur nos intervenants et leur entreprise :

Jürgen Cotsiers, Co-fondateur,
Marlin Marketing Cloud est un logiciel SaaS permettant d’assurer le succès de votre marketing digital. Pendant que votre agence crée et développe le contenu de votre prochaine campagne digitale, grâce à Marlin, vous gardez le contrôle de tous les aspects, de l’hébergement à la sécurité en passant par le monitoring et les considérations légales. Cela vous permet d’assurer votre conformité vis-à-vis de l’ensemble des réglementations qu’elles soient internes ou gouvernementales (RGPD,…). À travers nos différents services, nous mettons à votre disposition notre vaste expérience associée aux bonnes pratiques sectorielles. Que ce soit par le biais d’une formation, d’un programme d’agence de certification ou de notre service d’aide en ligne 24/7.

Loïc Troquet, Head of Technical Excellence at Solocal
Solocal Group's Internet activities are built around two product lines: Local Search and Digital Marketing. With Local Search, the Group offers digital services and solutions to businesses to increase their visibility and grow their contacts. Building on its expertise, Solocal Group today has nearly 490,000 customers and more than 2.4 billion visits via its three flagship brands (PagesJaunes, Mappy and Ooreka), as well as through its partnerships.

Théotime Leveque, Head of DevOps at Back Market
Founded in November 2014, Back Market is the first marketplace giving consumers access to thousands of tech products refurbished by certified professionals. These electrical and electronic products come with 6- to 24-month warranties, at unbeatable prices. Currently operating in 6 countries, our mission is to make refurbished tech products mainstream everywhere in the world.

Sébastien Aperghis-Tramoni, systems engineer at Oscaro
Oscaro is the European leader in online sales of original car parts. The Group continually improves the tool that made its name: its unique spare-parts catalogue, the largest and most complete on the market, with nearly 1 million references. The goal: the right part for the right vehicle for the right person.

Samuel Delplace, CIO at Ankama
Founded in 2001, Ankama is today an independent digital creation group specializing in entertainment and a major player in the world of video games. Since the phenomenal success in 2004 of the online game DOFUS (85 million accounts created worldwide, including more than 40 million in France), Ankama has expanded into several fields of activity to become a true transmedia group.

Wednesday, 06 November


Saturday Morning Breakfast Cereal - Destroy [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

If you ever want to see some sheer brutality, watch an ecologist come across a nest of invasives.

Today's News:


What’s new with Workers KV? [The Cloudflare Blog]


The Storage team here at Cloudflare shipped Workers KV, our global, low-latency, key-value store, earlier this year. As people have started using it, we’ve gotten some feature requests, and have shipped some new features in response! In this post, we’ll talk about some of these use cases and how these new features enable them.


We’ve shipped some new APIs, available both over HTTP as well as from inside of a Worker. The first one provides the ability to upload and delete more than one key/value pair at once. Given that Workers KV is great for read-heavy, write-light workloads, a common pattern when getting started with KV is to write a bunch of data via the API, and then read that data from within a Worker. You can now do these bulk uploads without needing a separate API call for every key/value pair. This feature is available via the HTTP API, but is not yet available from within a Worker.

For example, say we’re using KV to redirect legacy URLs to their new homes. We have a list of URLs to redirect, and where they should redirect to. We can turn this list into JSON that looks like this:

[
    {
        "key": "/old/post/1",
        "value": "/new-post-slug-1"
    },
    {
        "key": "/old/post/2",
        "value": "/new-post-slug-2"
    }
]
And then POST this JSON to the new bulk endpoint, /storage/kv/namespaces/:namespace_id/bulk. This will add both key/value pairs to our namespace.
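As a sketch, that upload could be scripted like this. The base URL, account path, and Authorization header are assumptions for illustration (placeholders, not from the post); the returned pieces can be handed to fetch() to perform the actual POST:

```javascript
// Sketch: assemble the bulk-upload request for the endpoint described above.
// The base URL and auth header shape are assumptions, not from the post.
function buildBulkRequest(accountId, namespaceId, apiToken, pairs) {
  return {
    url: `${accountId}/storage/kv/namespaces/${namespaceId}/bulk`,
    options: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(pairs), // the array of key/value pairs shown above
    },
  };
}

const request = buildBulkRequest("<account_id>", "<namespace_id>", "<api_token>", [
  { key: "/old/post/1", value: "/new-post-slug-1" },
  { key: "/old/post/2", value: "/new-post-slug-2" },
]);
// e.g. await fetch(request.url, request.options)
```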

Likewise, if we wanted to drop support for these redirects, we could issue a DELETE that has a body listing the keys to remove:

["/old/post/1", "/old/post/2"]

to /storage/kv/namespaces/:namespace_id/bulk, and we’d delete both key/value pairs in a single call to the API.

The bulk upload API has one more trick up its sleeve: not all data is a string. For example, you may have an image as a value, which is just a bag of bytes. If you need to write some binary data, you’ll have to base64-encode the value’s contents so that it’s valid JSON. You’ll also need to set one more key:

{
    "key": "profile-picture",
    "value": "aGVsbG8gd29ybGQ=",
    "base64": true
}

Workers KV will decode the value from base64, and then store the resulting bytes.

Beyond bulk upload and delete, we’ve also given you the ability to list all of the keys you’ve stored in any of your namespaces, from both the API and within a Worker. For example, if you wrote a blog powered by Workers + Workers KV, you might have each blog post stored as a key/value pair in a namespace called “contents”. Most blogs have some sort of “index” page that lists all of the posts that you can read. To create this page, we need to get a listing of all of the keys, since each key corresponds to a given post. We could do this from within a Worker by calling list() on our namespace binding:

const value = await contents.list()

But what we get back isn’t only a list of keys. The object looks like this:

{
  keys: [
    { name: "Title 1" },
    { name: "Title 2" }
  ],
  list_complete: false,
  cursor: "6Ck1la0VxJ0djhidm1MdX2FyD"
}

We’ll talk about this “cursor” stuff in a second, but if we wanted to get the list of titles, we’d have to iterate over the keys property, and pull out the names:

const keyNames = => key.name)

keyNames would be an array of strings:

["Title 1", "Title 2", "Title 3", "Title 4", "Title 5"]

We could then use keyNames to build our index page.

So what’s up with the list_complete and cursor properties? Well, imagine that we’ve been a very prolific blogger, and we’ve now written thousands of posts. The list API is paginated, meaning that it will only return the first thousand keys. To see if there are more pages available, you can check the list_complete property. If it is false, you can use the cursor to fetch another page of results. The value of cursor is an opaque token that you pass to another call to list:

const value = await NAMESPACE.list()
const cursor = value.cursor
const next_value = await NAMESPACE.list({"cursor": cursor})

This will give us another page of results, and we can repeat this process until list_complete is true.
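That loop can be wrapped in a small helper. This is a sketch, not from the post: listAllKeys is a hypothetical name, and it assumes the page shape shown above ({ keys, list_complete, cursor }):

```javascript
// Sketch: follow the cursor from page to page until list_complete is true,
// accumulating every key in the namespace.
async function listAllKeys(namespace) {
  let keys = [];
  let cursor;
  while (true) {
    // Only pass a cursor once we have one from a previous page.
    const page = await namespace.list(cursor ? { cursor } : {});
    keys = keys.concat(page.keys);
    if (page.list_complete) break;
    cursor = page.cursor;
  }
  return keys;
}
```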

Listing keys has one more trick up its sleeve: you can also return only keys that have a certain prefix. Imagine we want to have a list of posts, but only the posts that were made in October of 2019. While Workers KV is only a key/value store, we can use the prefix functionality to do interesting things by filtering the list. In our original implementation, we stored only the titles as keys:

  • Title 1
  • Title 2

We could change this to include the date in YYYY-MM-DD format, with a colon separating the two:

  • 2019-09-01:Title 1
  • 2019-10-15:Title 2

We can now ask for a list of all posts made in 2019:

const value = await NAMESPACE.list({"prefix": "2019"})

Or a list of all posts made in October of 2019:

const value = await NAMESPACE.list({"prefix": "2019-10"})

These calls will only return keys with the given prefix, which in our case, corresponds to a date. This technique can let you group keys together in interesting ways. We’re looking forward to seeing what you all do with this new functionality!

Relaxing limits

For various reasons, there are a few hard limits with what you can do with Workers KV. We’ve decided to raise some of these limits, which expands what you can do.

The first is the limit of the number of namespaces any account could have. This was previously set at 20, but some of you have made a lot of namespaces! We’ve decided to relax this limit to 100 instead. This means you can create five times the number of namespaces you previously could.

Additionally, we had a two megabyte maximum size for values. We’ve increased the limit for values to ten megabytes. With the release of Workers Sites, folks are keeping things like images inside of Workers KV, and two megabytes felt a bit cramped. While Workers KV is not a great fit for truly large values, ten megabytes gives you the ability to store larger images easily. As an example, a 4k monitor has a native resolution of 4096 x 2160 pixels. If we had an image at this resolution as a lossless PNG, for example, it would be just over five megabytes in size.

KV browser

Finally, you may have noticed that there’s now a KV browser in the dashboard! Needing to type out a cURL command just to see what’s in your namespace was a real pain, and so we’ve given you the ability to check out the contents of your namespaces right on the web. When you look at a namespace, you’ll see a table of keys and values.

The browser has grown with a bunch of useful features since it initially shipped. You can not only see your keys and values, but also add new ones, edit existing ones, and even upload files! You can also download them.

As we ship new features in Workers KV, we’ll be expanding the browser to include them too.

Wrangler integration

The Workers Developer Experience team has also been shipping some features related to Workers KV. Specifically, you can now fully interact with your namespaces, and the key/value pairs inside of them, directly from Wrangler.

For example, my personal website is running on Workers Sites. I have a Wrangler project named “website” to manage it. If I wanted to add another namespace, I could do this:

$ wrangler kv:namespace create new_namespace
Creating namespace with title "website-new_namespace"
Success: WorkersKvNamespace {
    id: "<id>",
    title: "website-new_namespace",
}

Add the following to your wrangler.toml:

kv-namespaces = [
    { binding = "new_namespace", id = "<id>" }
]

I’ve redacted the namespace IDs here, but Wrangler let me know that the creation was successful, and provided me with the configuration I need to put in my wrangler.toml. Once I’ve done that, I can add new key/value pairs:

$ wrangler kv:key put "hello" "world" --binding new_namespace

And read it back out again:

$ wrangler kv:key get "hello" --binding new_namespace

If you’d like to learn more about the design of these features, “How we design features for Wrangler, the Cloudflare Workers CLI” discusses them in depth.

More to come

The Storage team is working hard at improving Workers KV, and we’ll keep shipping new stuff every so often. Our updates will be more regular in the future. If there’s something you’d particularly like to see, please reach out!

Tuesday, 05 November


Serverlist October: GitHub Actions, Deployment Best Practices, and more [The Cloudflare Blog]


Check out our ninth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.

Sign up below to have The Serverlist sent directly to your mailbox.


Saturday Morning Breakfast Cereal - Talk [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I think I now have enough to do an entire compendium of damaging sex talks. Hooray?

Today's News:

Monday, 04 November


Sailfish OS Torronsuo is now available [Jolla Blog]

Sailfish OS 3.2.0 Torronsuo is a substantial release introducing updated hardware adaptation support, which enables us to bring Sailfish X to newer generation devices like the Sony Xperia 10. The Xperia 10 is also the first device to come with user data encryption enabled by default, and with SELinux, the Security-Enhanced Linux access control framework, enabled. We’ll be rolling out SELinux policies in phases: for now, Torronsuo introduces SELinux policies for display control (MCE) and for device startup and background services (systemd), and more will follow in upcoming releases. We have a few details of the Xperia 10 support to finalise, and will announce Sailfish X for the Sony Xperia 10 within the upcoming weeks.

Torronsuo National Park is in the Tavastia Proper region of Finland. This park is valuable for its birdlife and butterfly species. Roughly a hundred species nest in the area. Part of the birds and insects are species that typically live in the northern areas, and they aren’t seen much elsewhere in southern Finland.


Calling Experience

We have also improved the calling experience in co-operation with our partner OMP, who is developing Aurora OS. For incoming calls, the caller's country is now displayed if the call is coming from abroad. The call ending flow has been redesigned from a full-screen dialog to a less intrusive, more lightweight call ending popup, and you can now set a reminder to call someone back, either when receiving a call or from the call history view in the Phone app via a long tap on the caller name. More improvements to the call experience are in the pipeline for upcoming releases; for example, we are currently working on improving one-handed usage of the call ending popup.



Onboarding Experience

We are continuously improving the onboarding experience for new users. For example, pulley menu indications have been refined to make the menu easier to spot. Feedback showed that after deleting a note or contact some new users waited for the remorse timer to complete before continuing with other tasks. This prompted us to simplify the content deletion use cases across the operating system.


Clock app

Torronsuo also includes updates to the Clock app, which enjoys a bunch of enhancements and bug fixes. You can set the alarm snooze interval in Settings > Apps > Clock. Timers can now be configured more accurately to the nearest second and you can reset the progress of all saved timers with one pulley menu action.

And many more

You’ll find a whole host of other improvements elsewhere too. Battery notifications have been calmed down so they’ll now appear less frequently, and contacts search works better if you have a lot of contacts synced to the device. Android app opening is more reliable, and Android contacts performance is notably improved. Twitter works more smoothly in the Sailfish Browser. Editing WLAN networks now offers more enterprise EAP options, which were previously only accessible from the connection dialog. Many connectivity issues have been fixed, including OpenVPN certificate authentication. Along with Torronsuo, we have updated the Sailfish OS SDK to version 2.3.

Warm thanks to our partner OMP for the support and for co-developing many of the core improvements for Torronsuo with Jolla. We hope you enjoy using Torronsuo as much as we enjoyed making it. Big things are on the horizon for Sailfish OS, and we are excited to have you with us on the ride. 🙂

For more information please read the release notes.

The post Sailfish OS Torronsuo is now available appeared first on Jolla Blog.


Saturday Morning Breakfast Cereal - Flow [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

We're going to keep giving you busywork until you achieve a state of pure transcendence as you move beyond Self.

Today's News:

Last day to buy the new book and have it count toward the first-week sales total. Thanks, geeks!


The Project Jengo Saga: How Cloudflare Stood up to a Patent Troll – and Won! [The Cloudflare Blog]


Remember 2016? Pokemon Go was all the rage, we lost Prince, and there were surprising election results in both the UK and US. Back in 2016, Blackbird Technologies was notorious in the world of patent litigation. It was a boutique law firm that was one of the top ten most active patent trolls, filing lawsuits against more than 50 different defendants in a single year.

In October 2016, Blackbird was looking to acquire additional patents for their portfolio when they found an incredibly broad software patent with the ambiguous title, “PROVIDING AN INTERNET THIRD PARTY DATA CHANNEL.” They acquired this patent from its owner for $1 plus “other good and valuable consideration.” A little later, in March 2017, Blackbird decided to assert that patent against Cloudflare.

As we have explained previously, patent trolls benefit from a problematic incentive structure that allows them to take vague or abstract patents that they have no intention of developing and assert them as broadly as possible. Instead of building anything themselves, these trolls collect licensing fees or settlements from companies who are otherwise trying to start a business, produce useful products, and create good jobs. Companies facing such claims usually convince themselves that settlements in the tens or hundreds of thousands of dollars are quicker and cheaper outcomes than facing years of litigation and millions of dollars in attorneys' fees.

The following is how we worked to upend this asymmetric incentive structure.  

The Game Plan

After we were sued by Blackbird, we decided that we wouldn’t roll over. We decided we would do our best to turn the incentive structure on its head and make patent trolls think twice before attempting to take advantage of the system. We created Project Jengo in an effort to remove this economic asymmetry from the litigation. In our initial blog post we suggested we could level the playing field by: (i) defending ourselves vigorously against the patent lawsuit instead of rolling over and paying a licensing fee or settling, (ii) funding awards for crowdsourced prior art that could be used to invalidate any of Blackbird’s patents, not just the one asserted against Cloudflare, and (iii) asking the relevant bar associations to investigate what we considered to be Blackbird’s violations of the rules of professional conduct for attorneys.

How’d we do?

The Lawsuit

As promised, we fought the lawsuit vigorously. And as explained in a blog post earlier this year, we won as convincing a victory as one could in federal litigation at both the trial and appellate levels. In early 2018, the District Court for the Northern District of California dismissed the case Blackbird brought against us on subject matter eligibility grounds in response to an Alice motion. In a mere two-page order, Judge Vince Chhabria held that “[a]bstract ideas are not patentable” and Blackbird’s assertion of the patent “attempts to monopolize the abstract idea of monitoring a preexisting data stream between a server and a client.” Essentially, the case was rejected before it ever really started because the court found Blackbird’s patent to be invalid.

Blackbird appealed that decision to the Court of Appeals for the Federal Circuit, which unceremoniously affirmed the lower court's dismissal just three days after the appellate argument was heard. Following this ruling, we celebrated.

As noted in our earlier blog post, although we won the litigation as quickly and easily as possible, the federal litigation process still lasted nearly two years, involved combined legal filings of more than 1,500 pages, and ran up considerable legal expenses. Blackbird’s right to seek review of the decision by the US Supreme Court expired this summer, so the case is now officially over. As we’ve said from the start, we only intended to pursue Project Jengo as long as the case remained active.  

Even though we won decisively in court, that alone is not enough to change the incentive structure around patent troll suits. Patent trolls are repeat players who don’t have significant operations, so the costs of litigation and discovery are much less for them.

Funding Crowdsourced Prior Art to Invalidate Blackbird Patents

Prior Art

An integral part of our strategy against Blackbird was to engage our community to help us locate prior art that we could use to invalidate all of Blackbird’s patents. One of the most powerful legal arguments against the validity of a patent is that the invention claimed in the patent was already known or made public somewhere else (“prior art”). A collection of prior art on all the Blackbird patents could be used by anyone facing a lawsuit from Blackbird to defend themselves. The existence of an organized and accessible library of prior art would diminish the overall value of the Blackbird patent portfolio. That sort of risk to the patent portfolio was the kind of thing that would nudge the incentive structure in the other direction. Although the financial incentives made possible by the US legal system may support patent trolls, we knew our secret weapon was a very smart, very motivated community that loathed the extortionary activities of patent trolls and wanted to fight back.

And boy, were we right! We established a prior art bounty to pay cash rewards for prior art submissions that read on the patent Blackbird asserted against Cloudflare, as well as any of Blackbird’s other patents.  

We received hundreds of submissions across Blackbird’s portfolio of patents. We were very impressed with the quality of those submissions and think they call the validity of a number of those patents into question. All the relevant submissions we collected can be found here sorted by patent number, and we hope they are put to good use by other parties sued by Blackbird. Additionally, we’ve already forwarded prior art from the collection to a handful of companies and organizations that reached out to us because they were facing cases from Blackbird.

A high-level breakdown of the submissions:

  • We received 275 total unique submissions from 155 individuals on 49 separate patents, and we received multiple submissions on 26 patents.
  • 40.1% of the total submissions related to the ’335 patent asserted against Cloudflare.
  • The second highest concentration of prior art submissions (14.9% of total) relate to PUB20140200078 titled “Video Game Including User Determined Location Information.” The vast majority of these submissions note the similarity between the patent’s claims and the Niantic game Ingress.

A few interesting examples of prior art that were submitted that we think are particularly damaging to some of the Blackbird patents:

  • Internet based resource retrieval system (No. 8996546)
    The first two sentences of this 2004 patent’s abstract summarize the patent as a “resource retrieval system compris[ing] a server having a searchable database wherein users can readily access region-based publications similar to, but not necessarily limited to, printed telephone directories. The resource retrieval system communicates with at least one user system, preferably via the Internet.”

    The Project Jengo community reviewed the incredibly broad language in the patent claims and submitted a reference to an online phone book that allowed for the searching of local results from an online AT&T database. The submission is a link to an archive of a webpage from the year 2000, potentially calling the Blackbird patent into question on novelty grounds.

  • Illuminated product packaging (No. 7086751)
    This patent seeks protection for packaging “intended to hold a product for sale. The product package includes one or more light sources disposed therein and configured to direct light through one or more openings in the exterior of the product package, in order to entice customers to purchase the product.”

    In one of the more interesting Project Jengo submissions we received, the following information was provided: The CD packaging for Pink Floyd’s ‘Pulse’ included a blinking LED within the cardboard box that was active and visible on store shelves. We felt that this also spoke to the heart of this broad and seemingly obvious patented product.

  • Sports Bra (No. 7867058)
    This Blackbird patent involves a “sports bra having an integral storage pouch.”

    The Project Jengo community found a submission on a public discussion forum that pre-dates the ’058 patent and discloses the idea of modifying a bra by creating an incision in the inner lining and applying a velcro strip so as to form a resealable pocket within the bra… or essentially the same invention.

As a Bonus – an Ex Parte Victory

Almost immediately after we announced Jengo, we received an anonymous donation from someone who shared our frustration with patent trolls. As we announced, this gift allowed us to expand Jengo by using some of the prior art to directly challenge other Blackbird patents in administrative proceedings.

We initiated an administrative challenge against Blackbird Patent 7,797,448 (“GPS-internet Linkage”). The patent describes in broad and generic terms “[a]n integrated system comprising the Global Positioning System and the Internet wherein the integrated system can identify the precise geographic location of both sender and receiver communicating computer terminals.” You don’t have to be particularly technical to realize how largely obvious and widely applicable such a concept would be, as many modern Internet applications attempt to integrate some sort of location services using GPS. This was a dangerous patent in the hands of a patent troll.

Based on the strength of the prior art we received from the Project Jengo community and the number of times Blackbird had asserted the ’448 Patent to elicit a settlement from startups, we filed for an ex parte reexamination (EPR) of the ’448 Patent by the US Patent & Trademark Office (USPTO). The EPR is an administrative proceeding that can be used to challenge obviously deficient patents in a less complex, lengthy, or costly exercise than federal litigation.

We submitted our EPR challenge in November 2017. Blackbird responded by attempting to amend its patent's claims to make them narrower, in an effort to make the patent more defensible and avoid the challenge. In March 2018, the USPTO issued a Non-Final Office Action proposing to reject the ’448 Patent's claims altogether because they were anticipated by prior art submitted by Project Jengo. Blackbird did not respond to the Office Action, and a few months later, in August 2018, the USPTO issued a final order in line with the Office Action, cancelling the ’448 Patent's claims. The USPTO's decision means the ‘448 patent is invalid, and no one can assert its incredibly broad claims again.

Rewarding the Crowd

As promised, Cloudflare distributed more than $50,000 in cash awards to eighteen people who submitted prior art as part of the crowdsourced effort. We gave out more than $25,000 to people in support of their submissions related to the ’335 patent asserted against Cloudflare. Additionally, we awarded more than $30,000 to submitters in support of our efforts to invalidate the other patents in Blackbird's portfolio.

In general, we awarded bounties based on whether we incorporated the art found by the community into our legal filings, the analysis of the art as provided in the submission, whether someone else had previously submitted the art, and the strength and number of claims the art challenged in the specified Blackbird patent.

We asked many of the recent bounty winners why they decided to submit prior art to Project Jengo and received some of the following responses:  

"Over the years I've been disappointed and angered by a number of patent cases where I feel that the patent system has been abused by so-called ‘patent trolls’ in order to stifle innovation and profit from litigation. With Jengo in particular, I was a fan of what Cloudflare had done previously with Universal SSL. When the opportunity arose to potentially make a difference with a real patent troll case, I was happy to try and help."

Adam, Security Engineer

"I read the ’335 patent and thought it basically described a fundamental design principle of the world wide web (proxy servers). I was pretty sure such software was in widespread use by the priority date of the patent (1998). At that point I was curious if that was true so I did some Googling."

David, Software Developer

"Personally, I believe the vast majority of software patents are obvious and trivial. They should have never been granted. At the same time, fighting a patent claim is costly and time consuming regardless of the patent’s merit, while filing the claim is relatively cheap. Patent trolls exploit this imbalance and, in turn, they stifle innovation. Project Jengo was a great opportunity to use my knowledge of prior academic work for a good cause."

Kevin, Postdoctoral Research Scientist

"I'm pretty excited, I've never won a single thing in my life before. And to do it in service of taking down evil patent trolls? This is one of the best days of my life, no joke. I submitted because software patents are garbage and clearly designed to extort money from productive innovators for vague and obvious claims. Also, I was homeless at the time I submitted and was spending all day at the library anyway."

Garrett, San Francisco

What was the Impact?

The whole point of Project Jengo was to flip the incentive structure around patent trolls, who assume they can buy broad patents, spend a little money to initiate litigation, and then sit back and expect that a great percentage of defendants will send them a check. Under a proper incentive structure, they should have to expend some effort to prove their claims have merit, and we wanted to make available information that would support other potential defendants who may want to push back against claims under Blackbird patents.

One very simple measure of the impact is to review the number of new lawsuits Blackbird is bringing with its patent portfolio, which is a public record. So what does Blackbird’s activity look like on that point?


In the one-year period immediately preceding Project Jengo, (Q2’16-Q2’17) Blackbird filed more than 65 cases. Since Project Jengo launched more than 2.5 years ago, the number of cases Blackbird has filed has fallen to an average rate of 10 per year.  

Not only are they filing fewer cases, but Blackbird as an organization seems to be operating with fewer resources than at its peak. When we launched Project Jengo in May 2017, the Blackbird website identified a total team of 12: six lawyers (two co-founders and four litigation counsel) as well as a patent analysis group of six. Today, based on a review of the website and LinkedIn, it appears only three staff remain: one co-founder, one litigation counsel, and one member of the patent analysis group.

Ethics Complaints (section submitted by Cloudflare’s General Counsel, Doug Kramer)

We filed ethics complaints against both of Blackbird's co-founders before the bar associations in Massachusetts, Illinois, and the USPTO based on their self-described “new model” of pursuing intellectual property claims. Our complaints were based on rules of professional conduct prohibiting lawyers from acquiring a cause of action to assert on their own behalf, or, in the alternative, rules prohibiting attorneys from splitting contingency fees with a non-attorney.

We did not file such complaints lightly, as we take ethical standards seriously and don't think such proceedings should be used merely to harass. In this case, we think the conduct of patent trolls, who are seen as lawyers chasing an easy buck by taking advantage of distortions in the litigation process, has damaged the public perception of attorneys and respect for the legal profession--the exact sort of values the ethical rules and bar associations are meant to protect.

We based our complaints on the assignment agreement we found filed with the USPTO, where Blackbird purchased the ’335 patent from an inventor in October 2016 for $1. It seemed apparent that the actual but undisclosed compensation between the parties was considerably more than $1, so Blackbird may have simply acquired the cause of action or the agreement involved an arrangement where Blackbird would split a portion of any recovered fees with the inventor. Such agreements are generally prohibited by the ethical rules.

In public statements, Blackbird’s defense to these allegations was that it (i) was not a law firm (despite the fact that it is led exclusively by lawyers who are actively engaged in the litigation it pursues) and (ii) does not use contingency fee arrangements for the patents it acquires, but does use something “similar.” Both defenses were rather surprising to us. Doesn’t an organization led and staffed exclusively by lawyers who draft complaints, file papers with courts, and argue before judges amount to a “law firm”? In fact, we found pleadings in other Blackbird cases where the Blackbird leadership asked to be treated as lawyers so they could have access to sensitive technical evidence that is usually off-limits to anyone but the lawyers. And what does it mean for an agreement to be merely “similar” to a contingency agreement?

Disciplinary proceedings before bar associations are generally confidential, so we are limited in our ability to report developments in those cases. But regardless of the outcome, we’ve only approached bar associations in two states. Getting this back on the right track will require more than successful adjudications in front of such committees. Instead, it will take a broader change in orientation by these professional associations across the country to view such matters as more than mere political disputes or arguments between active litigants.

Our questions go to the very heart of ensuring an ethical legal profession; they are meant to determine what safeguards should be put in place to make sure that attorneys who take the oath are held to a standard beyond mere greed or base opportunism. They go to the question of whether being an attorney is merely a job, or whether attorneys should be held to higher standards, making sure their monopoly over the ability to bring lawsuits as officers of the court (and all the implications, costs, and power that represents) is wielded only by people who can be trusted to do so responsibly. Otherwise, what’s the point of ethical standards?

That’s all ... for now

We’ve said from the beginning that Project Jengo was a response to the patent troll litigation and we would end it as soon as the case was over. And now it is. Although we are proud of our work on this issue, we need to turn our focus back to the company’s mission -- to help build a better Internet. But we may be back at some point. Patent trolls remain a risk to growing companies like Cloudflare and nothing in this experience has persuaded us that settling a patent lawsuit is ever the right answer. We don’t plan to settle, and if brought into such litigation again at some point in the future, we think we have a pretty good blueprint for how to respond.

The Blackbird prior art will remain available here, and we remain available to consult with our colleagues at other companies who face these issues, as we have done many times over the past few years.

Finally, we would like to express our sincere gratitude to the community who researched the Blackbird patent portfolio and helped us fight this troll. It was our confidence in all of you that inspired the idea of Project Jengo in the first place, so its success belongs to you.

Thank you.  


Cloning a MAC address to bypass a captive portal [Fedora Magazine]

If you ever connect to a WiFi network outside your home or office, you often see a portal page. This page may ask you to accept terms of service or some other agreement to get access. But what happens when you can’t connect through this kind of portal? This article shows you how to use NetworkManager on Fedora to deal with some failure cases so you can still access the internet.

How captive portals work

Captive portals are web pages offered when a new device is connected to a network. When the user first accesses the Internet, the portal captures all web page requests and redirects them to a single portal page.

The page then asks the user to take some action, typically agreeing to a usage policy. Once the user agrees, they may authenticate to a RADIUS or other type of authentication system. In simple terms, the captive portal registers and authorizes a device based on the device’s MAC address and end user acceptance of terms. (The MAC address is a hardware-based value attached to any network interface, like a WiFi chip or card.)

Sometimes a device can’t load the captive portal needed to authenticate and authorize it to use the location’s WiFi access. Common examples include mobile devices and gaming consoles (Switch, PlayStation, etc.), which usually won’t launch a captive portal page when connecting to the Internet. You may see this situation when connecting to hotel or public WiFi access points.

You can use NetworkManager on Fedora to resolve these issues, though. Fedora will let you temporarily clone the connecting device’s MAC address and authenticate to the captive portal on the device’s behalf. You’ll need the MAC address of the device you want to connect. Typically this is printed somewhere on the device and labeled. It’s a six-byte hexadecimal value, so it might look like 4A:1A:4C:B0:38:1F. You can also usually find it through the device’s built-in menus.

Cloning with NetworkManager

First, open nm-connection-editor, or open the WiFi settings via the Settings applet. You can then use NetworkManager to clone as follows:

  • For Ethernet – Select the connected Ethernet connection. Then select the Ethernet tab. Note or copy the current MAC address. Enter the MAC address of the console or other device in the Cloned MAC address field.
  • For WiFi – Select the WiFi profile name. Then select the WiFi tab. Note or copy the current MAC address. Enter the MAC address of the console or other device in the Cloned MAC address field.
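If you prefer the command line, the same change can be made with nmcli. This is a sketch; the profile names (hotel-wifi, wired-profile) and the MAC address are placeholders you should replace with your own values:

```shell
# List connection profiles to find the name of the one you are using
nmcli connection show

# WiFi: set the cloned MAC address on the profile
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address 4A:1A:4C:B0:38:1F

# Ethernet: the equivalent property lives under 802-3-ethernet
nmcli connection modify wired-profile 802-3-ethernet.cloned-mac-address 4A:1A:4C:B0:38:1F

# Re-activate the profile so the new MAC address takes effect
nmcli connection up hotel-wifi

# To revert later, clear the property and re-activate the profile
nmcli connection modify hotel-wifi 802-11-wireless.cloned-mac-address ""
nmcli connection up hotel-wifi
```

The nmcli approach is handy on headless systems or over SSH, where the graphical editors aren’t available.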

Bringing up the desired device

Once the Fedora system connects with the Ethernet or WiFi profile, the cloned MAC address is used to request an IP address, and the captive portal loads. Enter the credentials needed and/or select the user agreement. The MAC address will then get authorized.

Now, disconnect the WiFi or Ethernet profile, and change the Fedora system’s MAC address back to its original value. Then boot up the console or other device. The device should now be able to access the Internet, because its network interface has been authorized via your Fedora system.

This isn’t all that NetworkManager can do, though. For instance, check out this article on randomizing your system’s hardware address for better privacy.

Sunday, 03 November


Saturday Morning Breakfast Cereal - Fossils [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

According to a Lee Smolin book there are actual people who've proposed serious theories like this. Minus the one-upping Satan part.

Today's News:

Saturday, 02 November


Saturday Morning Breakfast Cereal - Frog Prince [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

The really gross part is when she puts her lips to the roasted frog to eat it and it turns into Prince Charming.

Today's News:

Book singing in Charlottesville today!

Friday, 01 November


Saturday Morning Breakfast Cereal - Tangled [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

Also I haven't bathed in 3 years, so roll that into your calculations while you're climbing my hair.

Today's News:


Going Keyless Everywhere [The Cloudflare Blog]


Time flies. The Heartbleed vulnerability was discovered just over five and a half years ago. Heartbleed became a household name not only because it was one of the first bugs with its own web page and logo, but because of what it revealed about the fragility of the Internet as a whole. With Heartbleed, one tiny bug in a cryptography library exposed the personal data of the users of almost every website online.

Heartbleed is an example of an underappreciated class of bugs: remote memory disclosure vulnerabilities. High profile examples other than Heartbleed include Cloudbleed and most recently NetSpectre. These vulnerabilities allow attackers to extract secrets from servers by simply sending them specially-crafted packets. Cloudflare recently completed a multi-year project to make our platform more resilient against this category of bug.

For the last five years, the industry has been dealing with the consequences of the design that led to Heartbleed being so impactful. In this blog post we’ll dig into memory safety, and how we re-designed Cloudflare’s main product to protect private keys from the next Heartbleed.

Memory Disclosure

Perfect security is not possible for businesses with an online component. History has shown us that no matter how robust their security program, an unexpected exploit can leave a company exposed. One of the more famous recent incidents of this sort is Heartbleed, a vulnerability in a commonly used cryptography library called OpenSSL that exposed the inner details of millions of web servers to anyone with a connection to the Internet. Heartbleed made international news, caused millions of dollars of damage, and still hasn’t been fully resolved.

Typical web services only return data via well-defined public-facing interfaces called APIs. Clients don’t get to see what’s going on under the hood inside the server; that would be a huge privacy and security risk. Heartbleed broke that paradigm: it enabled anyone on the Internet to peek at the operating memory used by web servers, revealing privileged data usually not exposed via the API. Heartbleed could be used to extract data previously sent to the server, including passwords and credit card numbers. It could also reveal the inner workings and cryptographic secrets used inside the server, including TLS certificate private keys.

Heartbleed let attackers peek behind the curtain, but not too far. Sensitive data could be extracted, but not everything on the server was at risk. For example, Heartbleed did not enable attackers to steal the content of databases held on the server. You may ask: why was some data at risk and not the rest? The reason has to do with how modern operating systems are built.

A simplified view of process isolation

Most modern operating systems are split into multiple layers. These layers are analogous to security clearance levels. So-called user-space applications (like your browser) typically live in a low-security layer called user space. They only have access to computing resources (memory, CPU, networking) if the lower, more credentialed layers let them.

User-space applications need resources to function. For example, they need memory to store their code and working memory to do computations. However, it would be risky to give an application direct access to the physical RAM of the computer it runs on. Instead, the raw computing elements are restricted to a lower layer called the operating system kernel. The kernel only runs specially designed applications that safely manage these resources and mediate access to them for user-space applications.

When a new user-space application process is launched, the kernel gives it a virtual memory space. This virtual memory space acts like real memory to the application but is actually a safely guarded translation layer the kernel uses to protect the real memory. Each application’s virtual memory space is like a parallel universe dedicated to that application. This makes it impossible for one process to view or modify another’s memory; the other applications are simply not addressable.


Heartbleed, Cloudbleed and the process boundary

Heartbleed was a vulnerability in the OpenSSL library, which was part of many web server applications. These web servers run in user space, like any ordinary application. The vulnerability caused the web server to return up to 64 kilobytes of its memory in response to a specially-crafted inbound request.

Cloudbleed was also a memory disclosure bug, albeit one specific to Cloudflare, that got its name because it was so similar to Heartbleed. With Cloudbleed, the vulnerability was not in OpenSSL, but instead in a secondary web server application used for HTML parsing. When this code parsed a certain sequence of HTML, it ended up inserting some process memory into the web page it was serving.


It’s important to note that both of these bugs occurred in applications running in user space, not kernel space. This means that the memory exposed by the bug was necessarily part of the virtual memory of the application. Even if the bug were to expose megabytes of data, it would only expose data specific to that application, not other applications on the system.

In order for a web server to serve traffic over the encrypted HTTPS protocol, it needs access to the certificate’s private key, which is typically kept in the application’s memory. These keys were exposed to the Internet by Heartbleed. The Cloudbleed vulnerability affected a different process, the HTML parser, which doesn’t do HTTPS and therefore doesn’t keep the private key in memory. This meant that HTTPS keys were safe, even if other data in the HTML parser’s memory space wasn’t.


The fact that the HTML parser and the web server were different applications saved us from having to revoke and re-issue our customers’ TLS certificates. However, if another memory disclosure vulnerability is discovered in the web server, these keys are again at risk.

Moving keys out of Internet-facing processes

Not all web servers keep private keys in memory. In some deployments, private keys are held in a separate machine called a Hardware Security Module (HSM). HSMs are built to withstand physical intrusion and tampering, and are often certified to meet stringent regulatory requirements. They are also often bulky and expensive. Web servers designed to take advantage of keys in an HSM connect to it over a physical cable and communicate using a specialized protocol called PKCS#11. This allows the web server to serve encrypted content while being physically separated from the private key.


At Cloudflare, we built our own way to separate a web server from a private key: Keyless SSL. Rather than keeping the keys in a separate physical machine connected to the server with a cable, the keys are kept in a key server operated by the customer in their own infrastructure (this can also be backed by an HSM).


More recently, we launched Geo Key Manager, a service that allows users to store private keys in only select Cloudflare locations. Connections to locations that do not have access to the private key use Keyless SSL with a key server hosted in a datacenter that does have access.

In both Keyless SSL and Geo Key Manager, private keys are not only not part of the web server’s memory space, they’re often not even in the same country! This extreme degree of separation is not necessary to protect against the next Heartbleed. All that is needed is for the web server and the key server to not be part of the same application. So that’s what we did. We call this Keyless Everywhere.


Keyless SSL is coming from inside the house

Repurposing Keyless SSL for Cloudflare-held private keys was easy to conceptualize, but the path from ideation to running live in production wasn’t so straightforward. The core functionality of Keyless SSL comes from the open-source gokeyless server, which customers run on their infrastructure. Internally, we use it as a library and have replaced the main package with an implementation suited to our requirements (we’ve creatively dubbed it gokeyless-internal).

As with all major architecture changes, it’s prudent to test the model with something new and low-risk. In our case, the test bed was our experimental TLS 1.3 implementation. In order to iterate quickly through draft versions of the TLS specification and push releases without affecting the majority of Cloudflare customers, we re-wrote our custom nginx web server in Go and deployed it in parallel to our existing infrastructure. This server was designed from the start to never hold private keys and to leverage gokeyless-internal exclusively. At the time, there was only a small amount of TLS 1.3 traffic, all of it coming from beta versions of browsers, which allowed us to work through the initial kinks of gokeyless-internal without exposing the majority of visitors to security risks or outages.

The first step towards making TLS 1.3 fully keyless was identifying and implementing the new functionality we needed to add to gokeyless-internal. Keyless SSL was designed to run on customer infrastructure, with the expectation of supporting only a handful of private keys. But our edge must simultaneously support millions of private keys, so we implemented the same lazy loading logic we use in our web server, nginx. Furthermore, a typical customer deployment would put key servers behind a network load balancer, so they could be taken out of service for upgrades or other maintenance. Contrast this with our edge, where it’s important to maximize our resources by serving traffic during software upgrades. This problem is solved by the excellent tableflip package we use elsewhere at Cloudflare.

The next project to go keyless was Spectrum, which launched with default support for gokeyless-internal. With these small victories in hand, we had the confidence necessary to attempt the big challenge: porting our existing nginx infrastructure to a fully keyless model. After implementing the new functionality, and being satisfied with our integration tests, all that was left was to turn it on in production and call it a day, right? Anyone with experience with large distributed systems knows how far "working in dev" is from "done," and this story is no different. Thankfully, we anticipated problems and built a fallback into nginx to complete the handshake itself if any problems were encountered on the gokeyless-internal path. This allowed us to expose gokeyless-internal to production traffic without risking downtime in the event that our reimplementation of the nginx logic was not 100% bug-free.

When rolling back the code doesn’t roll back the problem

Our deployment plan was to enable Keyless Everywhere, find the most common causes of fallbacks, and then fix them. We could then repeat this process until all sources of fallbacks had been eliminated, after which we could remove access to private keys (and therefore the fallback) from nginx. One of the early causes of fallbacks was gokeyless-internal returning ErrKeyNotFound, indicating that it couldn’t find the requested private key in storage. This should not have been possible, since nginx only makes a request to gokeyless-internal after first finding the certificate and key pair in storage, and we always write the private key and certificate together. It turned out that in addition to returning the error for the intended case of the key truly not found, we were also returning it when transient errors like timeouts were encountered. To resolve this, we updated those transient error conditions to return ErrInternal, and deployed to our canary datacenters. Strangely, we found that a handful of instances in a single datacenter started encountering high rates of fallbacks, and the logs from nginx indicated it was due to a timeout between nginx and gokeyless-internal. The timeouts didn’t occur right away, but once a system started logging some timeouts it never stopped. Even after we rolled back the release, the fallbacks continued with the old version of the software! Furthermore, while nginx was complaining about timeouts, gokeyless-internal seemed perfectly healthy and was reporting reasonable performance metrics (sub-millisecond median request latency).


To debug the issue, we added detailed logging to both nginx and gokeyless, and followed the chain of events backwards once timeouts were encountered.

➜ ~ grep 'timed out' nginx.log | grep Keyless | head -5
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015157 Keyless SSL request/response timed out while reading Keyless SSL response, keyserver:
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015231 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver:
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015271 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver:
2018-07-25T05:30:49.000 29m41 2018/07/25 05:30:49 [error] 4525#0: *1015280 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver:
2018-07-25T05:30:50.000 29m41 2018/07/25 05:30:50 [error] 4525#0: *1015289 Keyless SSL request/response timed out while waiting for Keyless SSL response, keyserver:

You can see that the first request to log a timeout had id 1015157. It’s also interesting that the first log line reads "timed out while reading," while all the others read "timed out while waiting," and this latter message is the one that repeats forever. Here is the matching request in the gokeyless log:

➜ ~ grep 'id=1015157 ' gokeyless.log | head -1
2018-07-25T05:30:39.000 29m41 2018/07/25 05:30:39 [DEBUG] connection worker=ecdsa-29 opcode=OpECDSASignSHA256 id=1015157 sni=announce.php?info_hash=%a8%9e%9dc%cc%3b1%c8%23%e4%93%21r%0f%92mc%0c%15%89&peer_id=-ut353s-%ce%ad%5e%b1%99%06%24e%d5d%9a%08&port=42596&uploaded=65536&downloaded=0&left=0&corrupt=0&key=04a184b7&event=started&numwant=200&compact=1&no_peer_id=1 ip=

Aha! That SNI value is clearly invalid (SNIs are like Host headers, i.e. they are domains, not URL paths), and it’s also quite long. Our storage system indexes certificates based on two indices: which SNI they correspond to, and which IP addresses they correspond to (for older clients that don’t support SNI). Our storage interface uses the memcached protocol, and the client library that gokeyless-internal uses rejects requests for keys longer than 250 characters (memcached’s maximum key length), whereas the nginx logic is to simply ignore the invalid SNI and treat the request as if it only had an IP. The change in our new release had shifted this condition from ErrKeyNotFound to ErrInternal, which triggered cascading problems in nginx. The “timeouts” it encountered were actually a result of throwing away all in-flight requests multiplexed on a connection that happened to return ErrInternal for a single request. These requests were retried, but once this condition triggered, nginx became overloaded by the number of retried requests plus the continuous stream of new requests coming in with bad SNI, and was unable to recover. This explains why rolling back gokeyless-internal didn’t fix the problem.

This discovery finally brought our attention to nginx, which thus far had escaped blame since it had been working reliably with customer key servers for years. However, communicating over localhost with a multitenant key server is fundamentally different from reaching out over the public Internet to a customer’s key server, and we had to make the following changes:

  • Instead of a long connection timeout and a relatively short response timeout for customer key servers, extremely short connection timeouts and longer request timeouts are appropriate for a localhost key server.
  • Similarly, it’s reasonable to retry (with backoff) if we timeout waiting on a customer key server response, since we can’t trust the network. But over localhost, a timeout would only occur if gokeyless-internal were overloaded and the request were still queued for processing. In this case a retry would only lead to more total work being requested of gokeyless-internal, making the situation worse.
  • Most significantly, nginx must not throw away all requests multiplexed on a connection if any single one of them encounters an error, since a single connection no longer represents a single customer.

Implementations matter

CPU at the edge is one of our most precious assets, and it’s closely guarded by our performance team (aka CPU police). Soon after turning on Keyless Everywhere in one of our canary datacenters, they noticed gokeyless using ~50% of a core per instance. We were shifting the sign operations from nginx to gokeyless, so of course it would be using more CPU now. But nginx should have seen a commensurate reduction in CPU usage, right?

Going Keyless Everywhere

Wrong. Elliptic curve operations are very fast in Go, but RSA operations are known to be much slower than their BoringSSL counterparts.

Although Go 1.11 includes optimizations for RSA math operations, we needed more speed. Well-tuned assembly code is required to match the performance of BoringSSL, so Armando Faz from our Crypto team helped claw back some of the lost CPU by reimplementing parts of the math/big package with platform-dependent assembly in an internal fork of Go. Go’s recent assembly policy prefers portable Go code over assembly, so these optimizations were not upstreamed. There is still room for more optimization, and for that reason we’re still evaluating a move to cgo + BoringSSL for sign operations, despite cgo’s many downsides.

Changing our tooling

Process isolation is a powerful tool for protecting secrets in memory. Our move to Keyless Everywhere demonstrates that this is not a simple tool to leverage. Re-architecting an existing system such as nginx to use process isolation to protect secrets was time-consuming and difficult. Another approach to memory safety is to use a memory-safe language such as Rust.

Rust was originally developed by Mozilla but is starting to be used much more widely. The main advantage that Rust has over C/C++ is that it has memory safety features without a garbage collector.

Re-writing an existing application in a new language such as Rust is a daunting task. That said, many new Cloudflare features, from the powerful Firewall Rules feature to our WARP app, have been written in Rust to take advantage of its powerful memory-safety properties. We’re really happy with Rust so far and plan on using it even more in the future.


The harrowing aftermath of Heartbleed taught the industry a lesson that should have been obvious in retrospect: keeping important secrets in applications that can be accessed remotely via the Internet is a risky security practice. In the following years, with a lot of work, we leveraged process separation and Keyless SSL to ensure that the next Heartbleed wouldn’t put customer keys at risk.

However, this is not the end of the road. Recently memory disclosure vulnerabilities such as NetSpectre have been discovered which are able to bypass application process boundaries, so we continue to actively explore new ways to keep keys secure.



Delegated Credentials for TLS [The Cloudflare Blog]


Today we’re happy to announce support for a new cryptographic protocol that helps make it possible to deploy encrypted services in a global network while still maintaining fast performance and tight control of private keys: Delegated Credentials for TLS. We have been working with partners from Facebook, Mozilla, and the broader IETF community to define this emerging standard. We’re excited to share the gory details today in this blog post.

Also, be sure to check out the blog posts on the topic by our friends at Facebook and Mozilla!

Deploying TLS globally

Many of the technical problems we face at Cloudflare are widely shared problems across the Internet industry. As gratifying as it can be to solve a problem for ourselves and our customers, it can be even more gratifying to solve a problem for the entire Internet. For the past three years, we have been working with peers in the industry to solve a specific shared problem in the TLS infrastructure space: How do you terminate TLS connections while storing keys remotely and maintaining performance and availability? Today we’re announcing that Cloudflare now supports Delegated Credentials, the result of this work.

Cloudflare’s TLS/SSL features are among the top reasons customers use our service. Configuring TLS is hard to do without internal expertise. By automating TLS, website and web service operators gain the latest TLS features and the most secure configurations by default. It also reduces the risk of outages or bad press due to misconfigured or insecure encryption settings. Customers also gain early access to unique features like TLS 1.3, post-quantum cryptography, and OCSP stapling as they become available.

Unfortunately, for web services to authorize a service to terminate TLS for them, they have to hand over their private keys, which demands a high level of trust. For services with a global footprint, there is an additional layer of nuance: they may operate multiple data centers located in places with varying levels of physical security, and each of these needs to be trusted to terminate TLS.

To tackle these problems of trust, Cloudflare has invested in two technologies: Keyless SSL, which allows customers to use Cloudflare without sharing their private key with Cloudflare; and Geo Key Manager, which allows customers to choose the geographical locations in which Cloudflare should keep their keys. Both of these technologies can be deployed without any changes to browsers or other clients. However, they come with some downsides in the form of availability and performance degradation.

Keyless SSL introduces extra latency at the start of a connection. In order for a server without access to a private key to establish a connection with a client, that server needs to reach out to a key server, or a remote point of presence, and ask it to perform a private key operation. This not only adds latency to the connection, causing the content to load more slowly, but it also introduces some troublesome operational constraints for the customer. Specifically, the server with access to the key needs to be highly available or the connection can fail. Sites often use Cloudflare to improve their site’s availability, so having to run a high-availability key server is an unwelcome requirement.

Turning a pull into a push

The reason services like Keyless SSL that rely on remote keys are so brittle is their architecture: they are pull-based rather than push-based. Every time a client attempts a handshake with a server that doesn’t have the key, it needs to pull the authorization from the key server. An alternative way to build this sort of system is to periodically push a short-lived authorization key to the server and use that for handshakes. Switching from a pull-based model to a push-based model eliminates the additional latency, but it comes with additional requirements, including the need to change the client.

Enter the new TLS feature of Delegated Credentials (DCs). A delegated credential is a short-lived key that the certificate’s owner has delegated for use in TLS. It works like a power of attorney: your server authorizes our server to terminate TLS for a limited time. When a browser that supports this protocol connects to our edge servers, we can show it this “power of attorney” instead of needing to reach back to a customer’s server to have it authorize the TLS connection. This reduces latency and improves performance and reliability.

[Figure: the pull model]
[Figure: the push model]

A fresh delegated credential can be created and pushed out to TLS servers long before the previous credential expires. Momentary blips in availability will not lead to broken handshakes for clients that support delegated credentials. Furthermore, a Delegated Credentials-enabled TLS connection is just as fast as a standard TLS connection: there’s no need to connect to the key server for every handshake. This removes the main drawback of Keyless SSL for DC-enabled clients.

Delegated credentials are intended to be an Internet Standard RFC that anyone can implement and use, not a replacement for Keyless SSL. Since browsers will need to be updated to support the standard, proprietary mechanisms like Keyless SSL and Geo Key Manager will continue to be useful. Delegated credentials aren’t just useful in our context, which is why we’ve developed them openly and with contributions from across industry and academia. Facebook has integrated them into its own TLS implementation, and you can read more about how they view the security benefits here. When it comes to improving the security of the Internet, we’re all on the same team.

"We believe delegated credentials provide an effective way to boost security by reducing certificate lifetimes without sacrificing reliability. This will soon become an Internet standard and we hope others in the industry adopt delegated credentials to help make the Internet ecosystem more secure."

Subodh Iyengar, software engineer at Facebook

Extensibility beyond the PKI

At Cloudflare, we’re interested in pushing the state of the art forward by experimenting with new algorithms. In TLS, there are three main areas of experimentation: ciphers, key exchange algorithms, and authentication algorithms. Ciphers and key exchange algorithms are only dependent on two parties: the client and the server. This freedom allows us to deploy exciting new choices like ChaCha20-Poly1305 or post-quantum key agreement in lockstep with browsers. On the other hand, the authentication algorithms used in TLS are dependent on certificates, which introduces certificate authorities and the entire public key infrastructure into the mix.

Unfortunately, the public key infrastructure is very conservative in its choice of algorithms, making it harder to adopt newer cryptography for authentication algorithms in TLS. For instance, EdDSA, a highly-regarded signature scheme, is not supported by certificate authorities, and root programs limit the certificates that will be signed. With the emergence of quantum computing, experimenting with new algorithms is essential to determine which solutions are deployable and functional on the Internet.

Since delegated credentials introduce the ability to use new authentication key types without requiring changes to certificates themselves, this opens up a new area of experimentation. Delegated credentials can be used to provide a level of flexibility in the transition to post-quantum cryptography, by enabling new algorithms and modes of operation to coexist with the existing PKI infrastructure. It also enables tiny victories, like the ability to use smaller, faster Ed25519 signatures in TLS.

Inside DCs

A delegated credential contains a public key and an expiry time. This bundle is then signed by a certificate along with the certificate itself, binding the delegated credential to the certificate for which it is acting as “power of attorney”. A supporting client indicates its support for delegated credentials by including an extension in its Client Hello.

A server that supports delegated credentials composes the TLS Certificate Verify and Certificate messages as usual, but instead of signing with the certificate’s private key, it includes the certificate along with the DC, and signs with the DC’s private key. Therefore, the private key of the certificate only needs to be used for the signing of the DC.
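The issuance and validity half of this flow can be sketched in a few lines. This is a deliberately simplified illustration, not the wire format from the draft: Python’s standard library has no X.509 signing, so an HMAC under a stand-in “certificate key” plays the role of the certificate’s signature, and the handshake signature made with the DC’s own key is omitted.

```python
import hashlib
import hmac
import struct

# Toy model of the "power of attorney": a delegated credential bundles an
# expiry with a delegated public key and is bound to the certificate by a
# signature from the certificate's key. The HMAC below is a stand-in for
# that signature (a real implementation uses ECDSA, RSA-PSS, etc.).

CERT_KEY = b"stand-in for the certificate's private key"

def issue_dc(delegated_pubkey: bytes, lifetime_secs: int, now: int) -> bytes:
    """Bind an expiry and a delegated public key to the certificate."""
    body = struct.pack(">I", now + lifetime_secs) + delegated_pubkey
    return body + hmac.new(CERT_KEY, body, hashlib.sha256).digest()

def verify_dc(dc: bytes, now: int) -> bool:
    """Check the binding and that the credential has not expired."""
    body, sig = dc[:-32], dc[-32:]
    expected = hmac.new(CERT_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    (expiry,) = struct.unpack(">I", body[:4])
    return now < expiry

dc = issue_dc(b"\x01" * 32, lifetime_secs=24 * 3600, now=1_000_000)
assert verify_dc(dc, now=1_000_000 + 3600)           # good an hour later
assert not verify_dc(dc, now=1_000_000 + 25 * 3600)  # expired
tampered = dc[:-1] + bytes([dc[-1] ^ 1])
assert not verify_dc(tampered, now=1_000_000)        # binding broken
```

The key property is visible even in the toy: the certificate’s private key is touched only at issuance time, and everything the edge needs afterwards is short-lived.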

Certificates used for signing delegated credentials require a special X.509 certificate extension (currently only available at DigiCert). This requirement exists to avoid breaking assumptions people may have about the impact of temporary access to their keys on security, particularly in cases involving HSMs and the still-unfixed Bleichenbacher oracles in older TLS versions. Temporary access to a key can be used to sign many delegated credentials that remain valid far into the future, so support was made opt-in. Early versions of QUIC had similar issues, and ended up adopting TLS to fix them. Protocol evolution on the Internet requires working well with existing protocols and their flaws.

Delegated Credentials at Cloudflare and Beyond

Currently we use delegated credentials as a performance optimization for Geo Key Manager and Keyless SSL. Customers can update their certificates to include the special extension for delegated credentials, and we will automatically create delegated credentials and distribute them to the edge through Keyless SSL or Geo Key Manager. For more information, see the documentation. Delegated credentials also enable us to be more conservative about where we keep customers’ keys, improving our security posture.

Delegated credentials would be useless if they weren’t also supported by browsers and other HTTP clients. Christopher Patton, a former intern at Cloudflare, implemented support in Firefox and its underlying NSS security library. This feature is now in the Nightly versions of Firefox. You can turn it on by activating the configuration option security.tls.enable_delegated_credentials at about:config. Studies are ongoing on how effective this will be in a wider deployment. There is also support for delegated credentials in BoringSSL.

"At Mozilla we welcome ideas that help to make the Web PKI more robust. The Delegated Credentials feature can help to provide secure and performant TLS connections for our users, and we're happy to work with Cloudflare to help validate this feature."

Thyla van der Merwe, Cryptography Engineering Manager at Mozilla

One open issue is the question of client clock accuracy. Until we have a wide-scale study, we won’t know how many connections using delegated credentials will break because of the 24-hour time limit that is imposed. Some clients, in particular mobile clients, may have inaccurately set clocks, the root cause of one third of all certificate errors in Chrome. Part of the way we’re aiming to solve this problem is by standardizing and improving Roughtime, so web browsers and other services that need to validate certificates can do so independently of the client clock.

Cloudflare’s global scale means that we see connections from every corner of the world, and from many different kinds of connection and device. That reach enables us to find rare problems with the deployability of protocols. For example, our early deployment helped inform the development of the TLS 1.3 standard. As we enable developing protocols like delegated credentials, we learn about obstacles that inform and affect their future development.


As new protocols emerge, we'll continue to play a role in their development and bring their benefits to our customers. Today’s announcement of a technology that overcomes some limitations of Keyless SSL is just one example of how Cloudflare takes part in improving the Internet not just for our customers, but for everyone. During the standardization process of turning the draft into an RFC, we’ll continue to maintain our implementation and come up with new ways to apply delegated credentials.

Thursday, 31 October


Winning the Hackathon with Sourcegraph [Yelp Engineering and Product Blog]

Visualizing how code is used across the organization is a vital part of our engineers’ day-to-day workflow - and we have a *lot* of code to search through! This blog post details our journey of adopting Sourcegraph at Yelp to help our engineers maintain and dig through the tens of gigabytes of data in our git repos! Here at Yelp, we maintain hundreds of internal services and libraries that power our website and mobile apps. Examples include our mission-critical “emoji service” which helps translate and localize emojis, as well as our “homepage service” which… you guessed it, serves our venerable...


Happy Halloween! I wrote and illustrated a cheeky love story... [Sarah's Scribbles]

Happy Halloween! I wrote and illustrated a cheeky love story between a vampire and a werewolf. It’s on Tapas exclusively, five episodes are available now and it will update twice weekly.
Check it out here!


Saturday Morning Breakfast Cereal - Odds [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I probably should've mentioned that this is Intersection of Sex and Probability Week.

Today's News:

Wow! The new book has been top 50 on amazon for 48 hours now. Thanks so much, geeks!


Announcing cfnts: Cloudflare's implementation of NTS in Rust [The Cloudflare Blog]


Several months ago we announced that we were providing a new public time service. Part of what we were providing was the first major deployment of the new Network Time Security (NTS) protocol, with a newly written implementation of NTS in Rust. In the process, we received helpful advice from the NTP community, especially from the NTPSec and Chrony projects. We’ve also participated in several interoperability events. Now we are returning something to the community: Our implementation, cfnts, is now open source and we welcome your pull requests and issues.

The journey from a blank source file to a working, deployed service was a lengthy one, and it involved many people across multiple teams.

"Correct time is a necessity for most security protocols in use on the Internet. Despite this, secure time transfer over the Internet has previously required complicated configuration on a case by case basis. With the introduction of NTS, secure time synchronization will finally be available for everyone. It is a small, but important, step towards increasing security in all systems that depend on accurate time. I am happy that Cloudflare are sharing their NTS implementation. A diversity of software with NTS support is important for quick adoption of the new protocol."

Marcus Dansarie, coauthor of the NTS specification

How NTS works

NTS is structured as a suite of two sub-protocols as shown in the figure below. The first is the Network Time Security Key Exchange (NTS-KE), which is always conducted over Transport Layer Security (TLS) and handles the creation of key material and parameter negotiation for the second protocol. The second is NTPv4, the current version of the NTP protocol, which allows the client to synchronize their time from the remote server.

In order to maintain the scalability of NTPv4, it was important that the server not keep per-client state; a very small server can serve millions of NTP clients. NTS preserves this property while providing security by means of cookies: opaque blobs the server hands to the client that carry the server’s state.

In the first stage, the client sends a request to the NTS-KE server and gets a response via TLS. This exchange carries out a number of functions:

  • Negotiates the AEAD algorithm to be used in the second stage.
  • Negotiates the second protocol. Currently, the standard only defines how NTS works with NTPv4.
  • Negotiates the NTP server IP address and port.
  • Creates cookies for use in the second stage.
  • Creates two symmetric keys (C2S and S2C) from the TLS session via exporters.
[Figure: the NTS key exchange (NTS-KE) stage]
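On the wire, an NTS-KE message is a sequence of type-length-value records: a 16-bit record type whose high bit marks the record as critical, a 16-bit body length, then the body. A minimal encoder/decoder sketch (illustrative, not taken from cfnts):

```python
import struct

CRITICAL = 0x8000  # high bit of the record type marks a critical record

def encode_record(rec_type: int, body: bytes, critical: bool = False) -> bytes:
    """Serialize one NTS-KE record: type, body length, body."""
    if critical:
        rec_type |= CRITICAL
    return struct.pack(">HH", rec_type, len(body)) + body

def decode_records(data: bytes):
    """Parse a record stream into (type, critical, body) tuples."""
    records, off = [], 0
    while off < len(data):
        rec_type, length = struct.unpack_from(">HH", data, off)
        body = data[off + 4 : off + 4 + length]
        records.append((rec_type & ~CRITICAL, bool(rec_type & CRITICAL), body))
        off += 4 + length
    return records

# Per the NTS registries, record type 4 negotiates the AEAD algorithm, and
# AEAD ID 15 is AES-SIV-CMAC-256, the algorithm NTS requires servers to support.
msg = encode_record(4, struct.pack(">H", 15), critical=True)
assert decode_records(msg) == [(4, True, struct.pack(">H", 15))]
```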

In the second stage, the client securely synchronizes the clock with the negotiated NTP server. To synchronize securely, the client sends NTPv4 packets with four special extensions:

  • Unique Identifier Extension contains a random nonce used to prevent replay attacks.
  • NTS Cookie Extension contains one of the cookies that the client stores. Since currently only the client remembers the two AEAD keys (C2S and S2C), the server needs to use the cookie from this extension to extract the keys. Each cookie contains the keys encrypted under a secret key the server has.
  • NTS Cookie Placeholder Extension is a signal from the client to request additional cookies from the server. This extension is needed to make sure that the response is not much longer than the request to prevent amplification attacks.
  • NTS Authenticator and Encrypted Extension Fields Extension contains a ciphertext from the AEAD algorithm with C2S as a key and with the NTP header, timestamps, and all the previously mentioned extensions as associated data. Other possible extensions can be included as encrypted data within this field. Without this extension, the timestamp can be spoofed.
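Each of the extensions above rides in an NTPv4 extension field: a 16-bit field type, a 16-bit length that covers the 4-byte header plus the value, with the value zero-padded so the whole field is a multiple of 4 bytes. A small packing sketch (the field-type constants are from the NTS specification; the helper itself is illustrative):

```python
import struct

def pack_extension(field_type: int, value: bytes) -> bytes:
    """Pack one NTPv4 extension field, padded to a 4-byte boundary."""
    padding = (-len(value)) % 4
    length = 4 + len(value) + padding  # length includes the 4-byte header
    return struct.pack(">HH", field_type, length) + value + b"\x00" * padding

UNIQUE_IDENTIFIER = 0x0104  # NTS Unique Identifier extension field type

# A 32-byte nonce needs no padding: 4-byte header + 32 bytes = 36 bytes.
ext = pack_extension(UNIQUE_IDENTIFIER, b"\xaa" * 32)
assert len(ext) == 36
# Odd-sized values are padded up to the next 4-byte boundary.
assert len(pack_extension(UNIQUE_IDENTIFIER, b"\xaa" * 5)) % 4 == 0
```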

After getting a request, the server sends a response back to the client echoing the Unique Identifier Extension to prevent replay attacks, the NTS Cookie Extension to provide the client with more cookies, and the NTS Authenticator and Encrypted Extension Fields Extension with an AEAD ciphertext using S2C as the key. In the server’s response, however, the NTS Cookie Extension is not sent in plaintext: it is encrypted with the AEAD to provide unlinkability of the NTP requests.

The second stage can be repeated many times without going back to the first, since each request and response gives the client a new cookie. The expensive public key operations in TLS are thus amortized over a large number of requests. Furthermore, specialized timekeeping devices like FPGA implementations only need to implement a few symmetric cryptographic functions and can delegate the complex TLS stack to a different device.

Why Rust?

While many of our services are written in Go, and the Crypto team has considerable experience with Go, a garbage collection pause in the middle of responding to an NTP packet would hurt accuracy. We picked Rust because of its zero-cost abstractions and other useful features.

  • Memory safety After Heartbleed, Cloudbleed, and the steady drip of vulnerabilities caused by C’s lack of memory safety, it’s clear that C is not a good choice for new software dealing with untrusted inputs. The obvious route to memory safety is garbage collection, but garbage collection carries a substantial runtime overhead; Rust provides memory safety at compile time instead.
  • Non-nullability Null pointers are an edge case that is frequently not handled properly. Rust explicitly marks optionality, so all references in Rust can be safely dereferenced. The type system ensures that option types are properly handled.
  • Thread safety  Data-race prevention is another key feature of Rust. Rust’s ownership model ensures that all cross-thread accesses are synchronized by default. While not a panacea, this eliminates a major class of bugs.
  • Immutability Separating types into mutable and immutable is very important for reducing bugs. For example, in Java, when you pass an object into a function as a parameter, after the function is finished, you will never know whether the object has been mutated or not. Rust allows you to pass the object reference into the function and still be assured that the object is not mutated.
  • Error handling  Rust result types help with ensuring that operations that can produce errors are identified and a choice made about the error, even if that choice is passing it on.

While Rust provides safety with zero overhead, coding in Rust meant learning its ownership and borrowing rules and, for us, a new language. In this case the importance of security and performance meant we chose Rust over the potentially easier path of Go.

Dependencies we use

Because of our scale and for DDoS protection we needed a highly scalable server. For UDP protocols without the concept of a connection, the server can respond to one packet at a time easily, but for TCP this is more complex. Originally we thought about using Tokio. However, at the time Tokio suffered from scheduler problems that had caused other teams some issues. As a result we decided to use Mio directly, basing our work on the examples in Rustls.

We decided to use Rustls over OpenSSL or BoringSSL because of the crate's consistent error codes and default support for authentication that is difficult to disable accidentally. While there are some features that are not yet supported, it got the job done for our service.

Other engineering choices

More important than our choice of programming language was our implementation strategy. A working, fully featured NTP implementation is a complicated program built around a phase-locked loop. These have a difficult reputation due to their nonlinear nature, beyond the usual complexities of closed-loop control. The response of a phase-locked loop to a disturbance can be estimated if the loop is locked and the disturbance small. However, lock acquisition, large disturbances, and the filtering NTP requires are all hard to analyze mathematically, since they are not captured in the linear models applied for small-scale analysis. While NTP works with the total phase, unlike the phase-locked loops of electrical engineering, there are still nonlinear elements. For NTP, changes to this loop require weeks of operation to evaluate, as the loop responds very slowly.

Computer clocks are generally accurate over short periods, while networks are plagued with inconsistent delays. This demands a slow response. Changes we make to our service have taken hours to have an effect, as the clients slowly adapt to the new conditions. While RFC 5905 provides lots of details on an algorithm to adjust the clock, later implementations such as chrony have improved upon the algorithm through much more sophisticated nonlinear filters.
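A toy simulation makes the slowness concrete. The loop below slews away part of each measured offset and nudges the clock frequency by a small fraction of it, so a 100 ms step disturbance takes hundreds of poll intervals (hours of simulated time) to settle. This is a deliberately simplified proportional controller with made-up gains, not the RFC 5905 discipline algorithm or chrony’s nonlinear filters.

```python
import random

random.seed(7)
POLL = 64.0          # seconds between measurements
PHASE_GAIN = 0.5     # fraction of the offset slewed out each poll
FREQ_GAIN = 0.01     # fraction of the offset used to steer the frequency

offset = 0.100       # start 100 ms off true time
freq_error = 5e-6    # clock runs 5 ppm fast
history = []
for _ in range(200):
    measured = offset + random.gauss(0, 0.0005)  # 0.5 ms of network noise
    offset -= PHASE_GAIN * measured              # slew part of the offset
    freq_error -= FREQ_GAIN * measured / POLL    # steer the frequency
    offset += freq_error * POLL                  # drift until the next poll
    history.append(abs(offset))

# After 200 polls (roughly 3.5 hours of simulated time) the offset has
# settled from 100 ms down to the noise floor.
assert history[-1] < 0.02 < history[0]
```

Raising the gains makes this toy loop converge faster but amplifies network noise into the clock; that tension is exactly why real implementations respond slowly and why tuning them takes weeks of observation.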

Rather than implement these more sophisticated algorithms, we let chrony adjust the clock of our servers, and copy the state variables in the header from chrony and adjust the dispersion and root delay according to the formulas given in the RFC. This strategy let us focus on the new protocols.


Part of what the Internet Engineering Task Force (IETF) does is organize events like hackathons where implementers of a new standard can get together and try to make their stuff work with one another. This exposes bugs and infelicities of language in the standard and the implementations. We attended the IETF 104 hackathon to develop our server and make it work with other implementations. The NTP working group members were extremely generous with their time, and during the process we uncovered a few issues relating to the exact way one has to handle ALPN with older OpenSSL versions.

At the IETF 104 in Prague we had a working client and server for NTS-KE by the end of the hackathon. This was a good amount of progress considering we started with nothing. However, without implementing NTP we didn’t actually know that our server and client were computing the right thing. That would have to wait for later rounds of testing.

[Figure: Wireshark during some NTS debugging]

Crypto Week

As Crypto Week 2019 approached we were busily writing code. All of the NTP protocol had to be implemented, together with the connection between the NTP and NTS-KE parts of the server. We also had to deploy processes to synchronize the ticket encrypting keys around the world and work on reconfiguring our own timing infrastructure to support this new service.

With a few weeks to go we had a working implementation, but we needed servers and clients out there to test with. Because our server only supports TLS 1.3, which had only just landed in OpenSSL, there were some compatibility problems.

We ended up compiling a chrony branch with NTS support and NTPsec ourselves and testing against our own deployment. We also tested our client against test servers set up by the chrony and NTPsec projects, in the hopes that this would expose bugs and help our implementations work nicely together. After a few lengthy days of debugging, we found that our nonce length wasn’t exactly in accordance with the spec, which was quickly fixed. The NTPsec project was extremely helpful in this effort. Of course, this was the day our office had a blackout, so the testing happened outside in Yerba Buena Gardens.

[Photo: Yerba Buena commons. Taken by Wikipedia user Beyond My Ken. CC-BY-SA]

During the deployment of the service, we had to open up our firewall to incoming NTP packets. Because of NTP reflection attacks, UDP port 123 had been closed on our routers since the early days of Cloudflare’s network. Since clients sometimes send NTP packets from source port 123 as well, it’s impossible for NTP servers to filter reflection attacks without parsing the contents of NTP packets, which routers have difficulty doing. To protect Cloudflare infrastructure we got an entire subnet just for the time service, so it could be aggressively throttled and rerouted in case of massive DDoS attacks. This is an exceptional case: most edge services at Cloudflare run on every available IP.

Bug fixes

Shortly after the public launch, we discovered that older Windows versions shipped with NTP version 3, and our server only spoke version 4. This was easy to fix since the timestamp fields have not moved between NTP versions: we echo the version back, and most surviving NTP version 3 clients will understand what we meant.
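The fix is a one-liner in spirit: the first byte of an NTP packet packs the leap indicator (2 bits), version (3 bits), and mode (3 bits), and the server simply echoes the client’s version in its reply. A sketch (field layout from the NTP header; the helper itself is illustrative):

```python
def reply_first_byte(request_first_byte: int, leap: int = 0) -> int:
    """Build the first byte of an NTP server reply, echoing the client's version."""
    version = (request_first_byte >> 3) & 0x07  # echo whatever the client sent
    mode = 4                                    # mode 4 = server response
    return (leap << 6) | (version << 3) | mode

v3_client = (3 << 3) | 3  # version 3, mode 3 (client request)
v4_client = (4 << 3) | 3  # version 4, mode 3 (client request)

assert (reply_first_byte(v3_client) >> 3) & 0x07 == 3  # v3 gets a v3 reply
assert (reply_first_byte(v4_client) >> 3) & 0x07 == 4  # v4 gets a v4 reply
assert reply_first_byte(v3_client) & 0x07 == 4         # always server mode
```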

Also tricky was the failure of Network Time Foundation ntpd clients to expand the polling interval. It turns out that one has to echo back the client’s polling interval to have the polling interval expand. Chrony does not use the polling interval from the server, and so was not affected by this incompatibility.

Both of these issues were fixed in ways suggested by other NTP implementers who had run into these problems themselves. We thank Miroslav Lichter tremendously for telling us exactly what the problem was, and the members of the Cloudflare community who posted packet captures demonstrating these issues.

Continued improvement

The original production version of cfnts was not particularly object oriented, and several contributors were just learning Rust. As a result there was quite a bit of unwrap and unnecessary mutability flying around. Much of the code lived in free functions even when it could profitably be attached to structures. All of this had to be restructured. Keep in mind that some of the best code running in the real world has been written, rewritten, and sometimes rewritten again! This is actually a good thing.

As an internal project we relied on Cloudflare’s internal tooling for building, testing, and deploying code. We replaced these with tools available to everyone, like Docker, to ensure anyone can contribute. Our repository is integrated with Circle CI, ensuring that all contributions are automatically tested. In addition to unit tests, we test the entire end-to-end flow of getting a time measurement from a server.

The Future

NTPsec has already released support for NTS, but we see very little usage. Please try turning on NTS if you use NTPsec and see how it works with our service. As the draft advances through the standards process, the protocol will undergo an incompatible change when the identifiers are updated and assigned out of the IANA registry instead of being experimental ones, so this is very much an experiment. Note that your daemon will need TLS 1.3 support, which could require manually compiling OpenSSL and then linking against it.

We’ve also added our time service to the public NTP pool. The NTP pool is a widely used, volunteer-maintained service that provides NTP servers geographically spread across the world. Unfortunately, NTS doesn’t currently work well with the pool model, so for the best security, we recommend enabling NTS and using our service and other NTS-supporting servers.

In the future, we’re hoping that more clients support NTS, and have licensed our code liberally to enable this. We would love to hear if you incorporate it into a product and welcome contributions to make it more useful.

We’re also encouraged to see that Netnod has a production NTS service as well. The more time services and clients that adopt NTS, the more secure the Internet will be.


Tanya Verma and Gabbi Fisher were major contributors to the code, especially the configuration system and the client code. We’d also like to thank Gary Miller, Miroslav Lichter, and all the people at Cloudflare who set up their laptops and home machines to point at our service for early feedback.



Firefox tips for Fedora 31 [Fedora Magazine]

Fedora 31 Workstation comes with a Firefox backend moved from X11 to Wayland by default. That’s just another step in the ongoing effort of moving to Wayland. This affects GNOME on Wayland only. This article helps you understand some changes and extra steps you may wish to take depending on your preferences.

There is a firefox-wayland package available to activate the Wayland backend on KDE and Sway desktop environments.

The Wayland architecture is completely different from X11’s. The team adapted various aspects of Firefox internals to the new protocol where possible. However, some X11 features are missing completely. For such cases you can install and run the firefox-x11 package as a fallback.

If you want to run the Flash plugin, you must install the firefox-x11 package, since Flash requires X11 and GTK 2. Wayland also has a slightly different drag and drop behavior and strict popup window hierarchy.

Generally, if you think Firefox is not behaving like you want, try the firefox-x11 package. In this case, ideally you should report the misbehavior in Bugzilla.

The Wayland architecture comes with many benefits, and overcomes many limitations of X11. For instance, it can deliver smoother rendering and better HiDPI and screen scale support. You can also enable EGL hardware acceleration on Intel and AMD graphics cards. This decreases your power consumption and also gives you partially accelerated video playback. To enable it, navigate to about:config, and search for layers.acceleration.force-enabled. Set this option to true and restart Firefox.

Brave users may wish to try the Firefox next-generation renderer, called WebRender, written in Rust. To do that, search for gfx.webrender.enabled and gfx.webrender.all in about:config. Set them to true, then cross your fingers and restart Firefox.

But don’t worry — even if Firefox crashes at start after these experiments, you can launch it in safe mode to reset these options. Start Firefox from a terminal using the following command:

$ firefox -safe-mode

Wednesday, 30 October


Saturday Morning Breakfast Cereal - Package [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I've also gotten reviews that say that my comics not meant for kids are, in fact, inappropriate for kids.

Today's News:

Thanks everyone! We hit #13 on all books on amazon yesterday. Hope you like your books!


The TLS Post-Quantum Experiment [The Cloudflare Blog]


In June, we announced a wide-scale post-quantum experiment with Google. We implemented two post-quantum (i.e., not yet known to be broken by quantum computers) key exchanges, integrated them into our TLS stack and deployed the implementation on our edge servers and in Chrome Canary clients. The goal of the experiment was to evaluate the performance and feasibility of deployment in TLS of two post-quantum key agreement ciphers.

In our previous blog post on post-quantum cryptography, we described differences between those two ciphers in detail. In case you didn’t have a chance to read it, we include a quick recap here. One characteristic of post-quantum key exchange algorithms is that the public keys are much larger than those used by "classical" algorithms. This will have an impact on the duration of the TLS handshake. For our experiment, we chose two algorithms: isogeny-based SIKE and lattice-based HRSS. The former has short key sizes (~330 bytes) but has a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.
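Some back-of-the-envelope arithmetic using the approximate figures above, plus 32 bytes for an X25519 share in each hybrid, shows how different the two key shares are on the wire (rounded, illustrative numbers, not exact wire sizes):

```python
# Approximate public key / key-share sizes in bytes, from the figures above.
X25519 = 32
SIKE_P434 = 330    # small but computationally expensive
NTRU_HRSS = 1100   # large but orders of magnitude faster

cecpq2 = X25519 + NTRU_HRSS    # the "ostrich": big and fast
cecpq2b = X25519 + SIKE_P434   # the "turkey": small and slow

assert cecpq2 == 1132 and cecpq2b == 362
print(f"CECPQ2 key share ≈ {cecpq2} B, CECPQ2b ≈ {cecpq2b} B")
```

The roughly 3x difference in share size is what makes the experiment interesting: extra bytes cost the most on slow, lossy links, while extra computation costs the most on slow CPUs.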

During NIST’s Second PQC Standardization Conference, Nick Sullivan presented our approach to this experiment and some initial results. Quite accurately, he compared NTRU-HRSS to an ostrich and SIKE to a turkey—one is big and fast and the other is small and slow.


Setup & Execution

We based our experiment on TLS 1.3. Cloudflare operated the server-side TLS connections and Google Chrome (Canary and Dev builds) represented the client side of the experiment. We enabled both CECPQ2 (HRSS + X25519) and CECPQ2b (SIKE/p434 + X25519) key-agreement algorithms on all TLS-terminating edge servers. Since the post-quantum algorithms are considered experimental, the X25519 key exchange serves as a fallback to ensure the classical security of the connection.

Clients participating in the experiment were split into three groups: those who initiated the TLS handshake with post-quantum CECPQ2 keys, with post-quantum CECPQ2b keys, or with non-post-quantum X25519 keys. Each group represented approximately one third of the Chrome Canary population participating in the experiment.

In order to distinguish between clients participating in or excluded from the experiment, we added a custom extension to the TLS handshake. It worked as a simple flag sent by clients and echoed back by Cloudflare edge servers. This allowed us to measure the duration of TLS handshakes only for clients participating in the experiment.

For each connection, we collected telemetry metrics. The most important metric was a TLS server-side handshake duration defined as the time between receiving the Client Hello and Client Finished messages. The diagram below shows details of what was measured and how post-quantum key exchange was integrated with TLS 1.3.

[Diagram: what was measured and how post-quantum key exchange was integrated with TLS 1.3]

The experiment ran for 53 days in total, between August and October. During this time we collected millions of data samples, representing 5% of (anonymized) TLS connections that contained the extension signaling that the client was part of the experiment. We carried out the experiment in two phases.

In the first phase of the experiment, each client was assigned to use one of the three key exchange groups, and each client offered the same key exchange group for every connection. We collected over 10 million records over 40 days.

In the second phase of the experiment, client behavior was modified so that each client randomly chose which key exchange group to offer for each new connection, allowing us to directly compare the performance of each algorithm on a per-client basis. Data collection for this phase lasted 13 days and we collected 270 thousand records.


We now describe our server-side measurement results; the client-side results are described elsewhere.

What did we find?

The primary metric we collected for each connection was the server-side handshake duration. The below histograms show handshake duration timings for all client measurements gathered in the first phase of the experiment, as well as breakdowns into the top five operating systems. The operating system breakdowns shown are restricted to only desktop/laptop devices except for Android, which consists of only mobile devices.

[Figure: handshake duration histograms, overall and broken down by operating system]

It’s clear from the above plots that for most clients, CECPQ2b performs worse than CECPQ2 and CONTROL. Thus, the small key size of CECPQ2b does not make up for its large computational cost—the ostrich outpaces the turkey.

Digging a little deeper

This means we’re done, right? Not quite. We are interested in determining whether there are any populations of TLS clients for which CECPQ2b consistently outperforms CECPQ2. This requires taking a closer look at the long tail of handshake durations. The below plots show cumulative distribution functions (CDFs) of handshake timings zoomed in on the 80th percentile (i.e., showing the slowest 20% of handshakes).

[Figure: handshake duration CDFs, zoomed to the 80th percentile and above]

Here, we start to see something interesting. For Android, Linux, and Windows devices, there is a crossover point where CECPQ2b actually starts to outperform CECPQ2 (Android: ~94th percentile, Linux: ~92nd percentile, Windows: ~95th percentile). macOS and ChromeOS do not appear to have these crossover points.

These effects are small but statistically significant in some cases. The below table shows approximate 95% confidence intervals for the 50th (median), 95th, and 99th percentiles of handshake durations for each key exchange group and device type, calculated using Maritz-Jarrett estimators. The numbers within square brackets give the lower and upper bounds on our estimates for each percentile of the “true” distribution of handshake durations based on the samples collected in the experiment. For example, with a 95% confidence level we can say that the 99th percentile of handshake durations for CECPQ2 on Android devices lies between 4057ms and 4478ms, while the 99th percentile for CECPQ2b lies between 3276ms and 3646ms. Since the intervals do not overlap, we say that with statistical significance, the experiment indicates that CECPQ2b performs better than CECPQ2 for the slowest 1% of Android connections. Configurations where CECPQ2 or CECPQ2b outperforms the other with statistical significance are marked with green in the table.
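The post computed its intervals with Maritz-Jarrett estimators. As a rough illustration of the underlying idea (a confidence interval around a sample percentile), here is a plain bootstrap sketch instead, a different and simpler estimator, run on hypothetical data rather than the experiment's measurements:

```python
import random

def nearest_rank(sorted_xs, q):
    """Nearest-rank estimate of the q-th quantile of an already-sorted sample."""
    idx = min(len(sorted_xs) - 1, int(q * len(sorted_xs)))
    return sorted_xs[idx]

def bootstrap_ci(samples, q, n_boot=2000, alpha=0.05, seed=7):
    """Approximate (1 - alpha) confidence interval for the q-th quantile,
    obtained by resampling with replacement and taking quantiles of the
    resulting estimates."""
    rng = random.Random(seed)
    n = len(samples)
    estimates = sorted(
        nearest_rank(sorted(rng.choices(samples, k=n)), q)
        for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2))]
    return lo, hi

# Hypothetical handshake durations in ms -- not the experiment's data.
rng = random.Random(1)
durations = [abs(rng.gauss(300.0, 80.0)) for _ in range(1000)]
low, high = bootstrap_ci(durations, 0.99)
```

If two groups' intervals for the same percentile do not overlap, the difference is taken to be statistically significant, which is exactly the comparison made in the table.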

[Table: 95% confidence intervals for the 50th, 95th, and 99th percentile handshake durations, per key exchange group and device type]

Per-client comparison

A second phase of the experiment directly examined the performance of each key exchange algorithm for individual clients, where a client is defined to be a unique (anonymized) IP address and user agent pair. Instead of choosing a single key exchange algorithm for the duration of the experiment, clients randomly selected one of the experiment configurations for each new connection. Although the duration and sample size were limited for this phase of the experiment, we collected at least three handshake measurements for each group configuration from 3900 unique clients.

The plot below shows for each of these clients the difference in latency between CECPQ2 and CECPQ2b, taking the minimum latency sample for each key exchange group as the representative value. The CDF plot shows that for 80% of clients, CECPQ2 outperformed or matched CECPQ2b, and for 99% of clients, the latency gap remained within 70ms. At a high level, this indicates that very few clients performed significantly worse with CECPQ2 than with CECPQ2b.
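The per-client reduction described above can be sketched in a few lines. The records and client names here are hypothetical; only the reduction (minimum per group, then the difference) follows the post:

```python
# Hypothetical per-connection records: (client_id, group, handshake_ms).
records = [
    ("client-a", "CECPQ2", 118.0), ("client-a", "CECPQ2", 103.5),
    ("client-a", "CECPQ2b", 121.0), ("client-a", "CECPQ2b", 115.2),
    ("client-b", "CECPQ2", 240.0), ("client-b", "CECPQ2b", 228.9),
]

def latency_gaps(records):
    """For each client, CECPQ2b minus CECPQ2, using the minimum sample
    per group as the representative value (as in the post). Positive
    means CECPQ2 was faster for that client."""
    best = {}  # (client, group) -> minimum observed handshake duration
    for client, group, ms in records:
        key = (client, group)
        best[key] = min(ms, best.get(key, float("inf")))
    clients = {c for c, _ in best}
    return {
        c: best[(c, "CECPQ2b")] - best[(c, "CECPQ2")]
        for c in clients
        if (c, "CECPQ2") in best and (c, "CECPQ2b") in best
    }

gaps = latency_gaps(records)
```

Plotting the CDF of these per-client gaps yields the figure discussed above.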

[Figure: CDF of the per-client latency difference between CECPQ2 and CECPQ2b]

Do other factors impact the latency gap?

We looked at a number of other factors—including session resumption, IP version, and network location—to see if they impacted the latency gap between CECPQ2 and CECPQ2b. These factors impacted the overall handshake latency, but we did not find that any made a significant impact on the latency gap between post-quantum ciphers. We share some interesting observations from this analysis below.

Session resumption

Approximately 53% of all connections in the experiment were completed with TLS handshake resumption. However, the percentage of resumed connections varied significantly based on the device configuration. Connections from mobile devices were only resumed ~25% of the time, while between 40% and 70% of connections from laptop/desktop devices were resumed. Additionally, resumption provided between a 30% and 50% speedup for all device types.

IP version

We also examined the impact of IP version on handshake latency. Only 12.5% of the connections in the experiment used IPv6. These connections were 20-40% faster than IPv4 connections for desktop/laptop devices, but ~15% slower for mobile devices. This could be an artifact of IPv6 being generally deployed on newer devices with faster processors. For Android, the experiment was only run on devices with more modern processors, which perhaps eliminated the bias.

Network location

The slow connections making up the long tail of handshake durations were not isolated to a few countries, Autonomous Systems (ASes), or subnets, but originated from a globally diverse set of clients. We did not find a correlation between the relative performance of the two post-quantum key exchange algorithms based on these factors.


We found that CECPQ2 (the ostrich) outperformed CECPQ2b (the turkey), for the majority of connections in the experiment, indicating that fast algorithms with large keys may be more suitable for TLS than slow algorithms with small keys. However, we observed the opposite—that CECPQ2b outperformed CECPQ2—for the slowest connections on some devices, including Windows computers and Android mobile devices. One possible explanation for this is packet fragmentation and packet loss. The maximum size of TCP packets that can be sent across a network is limited by the maximum transmission unit (MTU) of the network path, which is often ~1400 bytes. During the TLS handshake the server responds to the client with its public key and ciphertext, the combined size of which exceeds the MTU, so it is likely that handshake messages must be split across multiple TCP packets. This increases the risk of lost packets and delays due to retransmission. A repeat of this experiment that includes collection of fine-grained TCP telemetry could confirm this hypothesis.
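A quick back-of-the-envelope check of the fragmentation hypothesis: the sizes below are approximations I am assuming (an HRSS public value is roughly 1138 bytes, a SIKE ciphertext roughly 346 bytes, an X25519 share 32 bytes; header overhead is a round number), not figures from the post:

```python
import math

MTU = 1400      # typical path MTU cited in the post, in bytes
OVERHEAD = 40   # rough IP + TCP header overhead per packet (assumption)
MSS = MTU - OVERHEAD

def tcp_segments(payload_bytes):
    """Minimum number of TCP segments needed to carry a payload."""
    return math.ceil(payload_bytes / MSS)

# Approximate key-share sizes (assumptions, not taken from the post).
cecpq2_share = 32 + 1138   # X25519 share + HRSS value
cecpq2b_share = 32 + 346   # X25519 share + SIKE value

# The server flight also carries the certificate chain and the rest of
# the handshake, so the combined payload easily spans several segments,
# and each extra segment is another chance for loss and retransmission.
```

The key share alone fits in one segment either way; it is the combined server flight that spills over, which is why fine-grained TCP telemetry would be needed to confirm the hypothesis.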

A somewhat surprising result of this experiment is just how fast HRSS performs for the majority of connections. Recall that the CECPQ2 cipher performs key exchange operations for both X25519 and HRSS, but the additional overhead of HRSS is barely noticeable. Comparing benchmark results, we can see that HRSS will be faster than X25519 on the server side and slower on the client side.

[Figure: server- and client-side benchmark results for X25519 and HRSS]

In our design, the client side performs two operations—key generation and KEM decapsulation. Looking at those two operations we can see that the key generation is a bottleneck here.

Key generation: 	3553.5 [ops/sec]
KEM decapsulation: 	17186.7 [ops/sec]

In algorithms with quotient-style keys (like NTRU), the key generation algorithm performs an inversion in the quotient ring—an operation that is quite computationally expensive. Alternatively, a TLS implementation could generate ephemeral keys ahead of time in order to speed up key exchange. There are several other lattice-based key exchange candidates that may be worth experimenting with in the context of TLS key exchange, which are based on different underlying principles than the HRSS construction. These candidates have similar key sizes and faster key generation algorithms, but have their own drawbacks. For now, HRSS looks like the more promising algorithm for use in TLS.
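The "generate ephemeral keys ahead of time" idea can be sketched generically: a background thread keeps a small pool of pre-generated keys so the handshake path only pays for a queue pop. The keygen function here is a stand-in for a real lattice keygen, not actual cryptography:

```python
import queue
import threading
import time

def expensive_keygen():
    """Stand-in for a slow quotient-ring key generation (hypothetical)."""
    time.sleep(0.001)  # simulate sub-millisecond-scale work
    return object()    # placeholder for a real ephemeral key pair

class KeyPool:
    """Pre-generate ephemeral keys on a background thread so that a
    handshake only pays the cost of taking one from the pool."""
    def __init__(self, size=8):
        self._keys = queue.Queue(maxsize=size)
        threading.Thread(target=self._fill, daemon=True).start()

    def _fill(self):
        while True:
            self._keys.put(expensive_keygen())  # blocks while pool is full

    def take(self):
        return self._keys.get()  # blocks only when the pool is empty

pool = KeyPool()
key = pool.take()
```

The trade-off is that "ephemeral" keys now live slightly longer before use, so a real implementation would bound the pool size and the age of pooled keys.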

In the case of SIKE, we implemented the most recent version of the algorithm, and instantiated it with the most performance-efficient parameter set for our experiment. The algorithm is computationally expensive, so we were required to use assembly to optimize it. In order to ensure best performance on Intel, most performance-critical operations have two different implementations; the library detects CPU capabilities and uses faster instructions if available, but otherwise falls back to a slightly slower generic implementation. We developed our own optimizations for 64-bit ARM CPUs. Nevertheless, our results show that SIKE incurred a significant overhead for every connection, especially on devices with weaker processors. It must be noted that high-performance isogeny-based public key cryptography is arguably much less developed than its lattice-based counterparts. Some ideas to develop this are floating around, and we expect to see performance improvements in the future.

[Figure: SIKE benchmark results]

Tuesday, 29 October


What’s new in Fedora 31 Workstation [Fedora Magazine]

Fedora 31 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website right now. There are several new and noteworthy changes in Fedora 31 Workstation. Read more details below.

Fedora 31 Workstation includes the latest release of GNOME Desktop Environment for users of all types. GNOME 3.34 in Fedora 31 Workstation includes many updates and improvements, including:

Refreshed Background Chooser

Choosing your desktop background in Fedora Workstation is now easier. The newly redesigned background chooser allows you to quickly and easily see and change both your desktop and lock screen backgrounds.

Custom Application Folders

Fedora 31 Workstation now allows you to easily create application folders in the Overview. Keep your application listing clutter free and well organized with this new feature:

Do you want the full details of everything in GNOME 3.34? Visit the release notes.


Upgrading Fedora 30 to Fedora 31 [Fedora Magazine]

Fedora 31 is available now. You’ll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 30 to Fedora 31.

Before upgrading, visit the wiki page of common Fedora 31 bugs to see if there’s an issue that might affect your upgrade. Although the Fedora community tries to ensure upgrades work well, there’s no way to guarantee this for every combination of hardware and software that users might have.

Upgrading Fedora 30 Workstation to Fedora 31

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a screen informing you that Fedora 31 is Now Available.

If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 30 to Fedora 31. Using this plugin will make your upgrade to Fedora 31 simple and easy.

1. Update software and back up your system

Before you start the upgrade process, make sure you have the latest software for Fedora 30. This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=31

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.

4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 30; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 31 system.

Upgrading Fedora: Upgrade complete!

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade quick docs for more information on troubleshooting.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.


Fedora 31 is officially here! [Fedora Magazine]

It’s here! We’re proud to announce the release of Fedora 31. Thanks to the hard work of thousands of Fedora community members and contributors, we’re celebrating yet another on-time release. This is getting to be a habit!

If you just want to get to the bits without delay, go to right now. For details, read on!


If you haven’t used the Fedora Toolbox, this is a great time to try it out. This is a simple tool for launching and managing personal workspace containers, so you can do development or experiment in an isolated environment. It’s as simple as running “toolbox enter” from the command line.

This containerized workflow is vital for users of the ostree-based Fedora variants like CoreOS, IoT, and Silverblue, but is also extremely useful on any workstation or even server system. Look for many more enhancements to this tool and the user experience around it in the next few months — your feedback is very welcome.

All of Fedora’s Flavors

Fedora Editions are targeted outputs geared toward specific “showcase” uses.

Fedora Workstation focuses on the desktop, and in particular on software developers who want a “just works” Linux operating system experience. This release features GNOME 3.34, which brings significant performance enhancements that will be especially noticeable on lower-powered hardware.

Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion.

And, in preview state, we have Fedora CoreOS, a category-defining operating system made for the modern container world, and Fedora IoT for “edge computing” use cases. (Stay tuned for a planned contest to find a shiny name for the IoT edition!)

Of course, we produce more than just the editions. Fedora Spins and Labs target a variety of audiences and use cases, including Fedora Astronomy, which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like KDE Plasma and Xfce.

And, don’t forget our alternate architectures, ARM AArch64, Power, and S390x. Of particular note, we have improved support for the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64, plus initial support for “panfrost”, an open source 3D accelerated graphics driver for newer Arm Mali “midgard” GPUs.

If you’re using an older 32-bit only i686 system, though, it’s time to find an alternative — we bid farewell to 32-bit Intel architecture as a base system this release.

General improvements

No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “First” foundation, we’re enabling CgroupsV2 (if you’re using Docker, make sure to check this out). Glibc 2.30 and NodeJS 12 are among the many updated packages in Fedora 31. And, we’ve switched the “python” command to be Python 3 — remember, Python 2 is end-of-life at the end of this year.

We’re excited for you to try out the new release! Go to and download it now. Or if you’re already running a Fedora operating system, follow the easy upgrade instructions.

In the unlikely event of a problem….

If you run into a problem, check out the Fedora 31 Common Bugs page, and if you have questions, visit our Ask Fedora user-support platform.

Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release. And if you’re in Portland for USENIX LISA this week, stop by the expo floor and visit me at the Red Hat, Fedora, and CentOS booth.


DNS Encryption Explained [The Cloudflare Blog]


The Domain Name System (DNS) is the address book of the Internet. When you visit or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. Encrypting DNS would improve user privacy and security. In this post, we will look at two mechanisms for encrypting DNS, known as DNS over TLS (DoT) and DNS over HTTPS (DoH), and explain how they work.

Applications that want to resolve a domain name to an IP address typically use DNS. This is usually not done explicitly by the programmer who wrote the application. Instead, the programmer writes something such as fetch("") and expects a software library to handle the translation of “” to an IP address.

Behind the scenes, the software library is responsible for discovering and connecting to the external recursive DNS resolver and speaking the DNS protocol (see the figure below) in order to resolve the name requested by the application. The choice of the external DNS resolver and whether any privacy and security is provided at all is outside the control of the application. It depends on the software library in use, and the policies provided by the operating system of the device that runs the software.
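This division of labor is visible in any language's standard library. In Python, for instance, the resolver choice and transport are entirely hidden behind a single call; the program only ever sees the resulting addresses:

```python
import socket

def resolve(hostname):
    """Return the set of IP addresses a name resolves to. Which resolver
    is contacted, and whether the query is protected, is decided by the
    library and the operating system -- not by this code."""
    infos = socket.getaddrinfo(hostname, None)
    return {info[4][0] for info in infos}

# "localhost" resolves locally, without contacting an external resolver.
addresses = resolve("localhost")
```

For any non-local name, the same call would silently use whatever external resolver the OS has configured, with whatever privacy properties that resolver's transport happens to have.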

[Figure: Overview of DNS query and response]

The external DNS resolver

The operating system usually learns the resolver address from the local network using Dynamic Host Configuration Protocol (DHCP). In home and mobile networks, it typically ends up using the resolver from the Internet Service Provider (ISP). In corporate networks, the selected resolver is typically controlled by the network administrator. If desired, users with control over their devices can override the resolver with a specific address, such as the address of a public resolver like Google’s or Cloudflare’s, but most users will likely not bother changing it when connecting to a public Wi-Fi hotspot at a coffee shop or airport.

The choice of external resolver has a direct impact on the end-user experience. Most users do not change their resolver settings and will likely end up using the DNS resolver from their network provider. The most obvious observable property is the speed and accuracy of name resolution. Features that improve privacy or security might not be immediately visible, but will help to prevent others from profiling or interfering with your browsing activity. This is especially important on public Wi-Fi networks where anyone in physical proximity can capture and decrypt wireless network traffic.

Unencrypted DNS

Ever since DNS was created in 1987, it has been largely unencrypted. Everyone between your device and the resolver is able to snoop on or even modify your DNS queries and responses. This includes anyone in your local Wi-Fi network, your Internet Service Provider (ISP), and transit providers. This may affect your privacy by revealing the domain names that you are visiting.

What can they see? Well, consider this network packet capture taken from a laptop connected to a home network:

[Figure: packet capture of an unencrypted DNS answer on a home network]

The following observations can be made:

  • The UDP source port is 53 which is the standard port number for unencrypted DNS. The UDP payload is therefore likely to be a DNS answer.
  • That suggests that the source IP address is a DNS resolver while the destination IP is the DNS client.
  • The UDP payload could indeed be parsed as a DNS answer, and reveals that the user was trying to visit
  • If there are any future connections to or, then it is most likely traffic that is directed at “”.
  • If there is some further encrypted HTTPS traffic to this IP, succeeded by more DNS queries, it could indicate that a web browser loaded additional resources from that page. That could potentially reveal the pages that a user was looking at while visiting
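The reason all of this is visible is that a plaintext DNS message has a fixed, well-known wire format. A minimal sketch of building such a query (the transaction ID and domain here are illustrative) shows that the queried name travels verbatim, length-prefixed, in the udp/53 payload:

```python
import struct

def build_query(name, qtype=1, txid=0x1234):
    """Build a minimal plaintext DNS query (QTYPE=A, QCLASS=IN), as sent
    over udp/53. Every byte here is visible to anyone on the path."""
    # Header: id, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed; a zero byte terminates.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)
    return header + question

packet = build_query("example.com")
```

Searching the raw packet bytes for the labels ("example", "com") is exactly what a passive observer, or a simple port-53 firewall rule, can do.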

Since the DNS messages are unprotected, other attacks are possible:

  • Queries could be directed to a resolver that performs DNS hijacking. For example, in the UK, Virgin Media and BT return a fake response for domains that do not exist, redirecting users to a search page. This redirection is possible because the computer/phone blindly trusts the DNS resolver that was advertised using DHCP by the ISP-provided gateway router.
  • Firewalls can easily intercept, block or modify any unencrypted DNS traffic based on the port number alone. It is worth noting that plaintext inspection is not a silver bullet for achieving visibility goals, because the DNS resolver can be bypassed.

Encrypting DNS

Encrypting DNS makes it much harder for snoopers to look into your DNS messages, or to corrupt them in transit. Just as the web moved from unencrypted HTTP to encrypted HTTPS there are now upgrades to the DNS protocol that encrypt DNS itself. Encrypting the web has made it possible for private and secure communications and commerce to flourish. Encrypting DNS will further enhance user privacy.

Two standardized mechanisms exist to secure the DNS transport between you and the resolver, DNS over TLS (2016) and DNS Queries over HTTPS (2018). Both are based on Transport Layer Security (TLS) which is also used to secure communication between you and a website using HTTPS. In TLS, the server (be it a web server or DNS resolver) authenticates itself to the client (your device) using a certificate. This ensures that no other party can impersonate the server (the resolver).

With DNS over TLS (DoT), the original DNS message is directly embedded into the secure TLS channel. From the outside, one can neither learn the name that was being queried nor modify it. Only the intended client application is able to decrypt it. A packet trace looks like this:

[Figure: packet trace of a DNS-over-TLS connection]

In the packet trace for unencrypted DNS, it was clear that a DNS request can be sent directly by the client, followed by a DNS answer from the resolver. In the encrypted DoT case however, some TLS handshake messages are exchanged prior to sending encrypted DNS messages:

  • The client sends a Client Hello, advertising its supported TLS capabilities.
  • The server responds with a Server Hello, agreeing on TLS parameters that will be used to secure the connection. The Certificate message contains the identity of the server while the Certificate Verify message will contain a digital signature which can be verified by the client using the server Certificate. The client typically checks this certificate against its local list of trusted Certificate Authorities, but the DoT specification mentions alternative trust mechanisms such as public key pinning.
  • Once the TLS handshake is Finished by both the client and server, they can finally start exchanging encrypted messages.
  • While the above picture contains one DNS query and answer, in practice the secure TLS connection will remain open and will be reused for future DNS queries.
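Inside that TLS channel, DoT reuses the framing that DNS over TCP has always used: each DNS message is preceded by a two-byte big-endian length, which is how multiple queries and answers can share one long-lived connection. A small sketch of just that framing layer (the TLS wrapping itself is omitted):

```python
import struct

def frame_for_dot(dns_message: bytes) -> bytes:
    """Prefix a DNS message with its two-byte big-endian length, the
    DNS-over-TCP framing that DoT carries inside the TLS stream."""
    return struct.pack(">H", len(dns_message)) + dns_message

def unframe(stream: bytes):
    """Split a received byte stream back into individual DNS messages."""
    messages, offset = [], 0
    while offset + 2 <= len(stream):
        (length,) = struct.unpack_from(">H", stream, offset)
        messages.append(stream[offset + 2 : offset + 2 + length])
        offset += 2 + length
    return messages
```

A real client would wrap a TCP socket to port 853 in TLS (verifying the resolver's certificate) and pass these framed bytes through it.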

Securing unencrypted protocols by slapping TLS on top of a new port has been done before:

  • Web traffic: HTTP (tcp/80) -> HTTPS (tcp/443)
  • Sending email: SMTP (tcp/25) -> SMTPS (tcp/465)
  • Receiving email: IMAP (tcp/143) -> IMAPS (tcp/993)
  • Now: DNS (tcp/53 or udp/53) -> DoT (tcp/853)

A problem with introducing a new port is that existing firewalls may block it, either because they employ a whitelist approach where new services have to be explicitly enabled, or a blocklist approach where a network administrator explicitly blocks a service. If the secure option (DoT) is less likely to be available than its insecure counterpart, then users and applications might be tempted to fall back to unencrypted DNS. This could subsequently allow attackers to force users onto an insecure version.

Such fallback attacks are not theoretical. SSL stripping has previously been used to downgrade HTTPS websites to HTTP, allowing attackers to steal passwords or hijack accounts.

Another approach, DNS Queries over HTTPS (DoH), was designed to support two primary use cases:

  • Prevent the above problem where on-path devices interfere with DNS. This includes the port blocking problem above.
  • Enable web applications to access DNS through existing browser APIs.

DoH is essentially HTTPS, the same encrypted standard the web uses, and reuses the same port number (tcp/443). Web browsers have already deprecated non-secure HTTP in favor of HTTPS. That makes HTTPS a great choice for securely transporting DNS messages. An example of such a DoH request can be found here.
[Figure: DoH: DNS query and response transported over a secure HTTPS stream]
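Concretely, the RFC 8484 GET form of a DoH request puts the raw DNS query, base64url-encoded with padding stripped, into a "dns" query parameter. A sketch of building that request path (the "/dns-query" endpoint is the conventional one, and the query bytes here are a dummy):

```python
import base64

def doh_get_path(dns_query: bytes, base="/dns-query"):
    """RFC 8484 GET form: base64url-encode the raw DNS query, strip the
    '=' padding, and place it in the 'dns' query parameter."""
    encoded = base64.urlsafe_b64encode(dns_query).rstrip(b"=").decode()
    return f"{base}?dns={encoded}"

# Dummy query bytes for illustration, not a meaningful DNS message.
path = doh_get_path(bytes.fromhex("abcd01000001000000000000"))
```

Because the result is an ordinary HTTPS request to tcp/443, it is indistinguishable on the wire from any other web traffic, which is precisely the port-blocking resistance described above.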

Some users have been concerned that the use of HTTPS could weaken privacy due to the potential use of cookies for tracking purposes. The DoH protocol designers considered various privacy aspects and explicitly discouraged use of HTTP cookies to prevent tracking, a recommendation that is widely respected. TLS session resumption improves TLS 1.2 handshake performance, but can potentially be used to correlate TLS connections. Luckily, use of TLS 1.3 obviates the need for TLS session resumption by reducing the number of round trips by default, effectively addressing its associated privacy concern.

Using HTTPS means that HTTP protocol improvements can also benefit DoH. For example, the in-development HTTP/3 protocol, built on top of QUIC, could offer additional performance improvements in the presence of packet loss due to lack of head-of-line blocking. This means that multiple DNS queries could be sent simultaneously over the secure channel without blocking each other when one packet is lost.

A draft for DNS over QUIC (DNS/QUIC) also exists and is similar to DoT, but without the head-of-line blocking problem due to the use of QUIC. Both HTTP/3 and DNS/QUIC, however, require a UDP port to be accessible. In theory, both could fall back to DoH over HTTP/2 and DoT respectively.

Deployment of DoT and DoH

As both DoT and DoH are relatively new, they are not universally deployed yet. On the server side, major public resolvers including Cloudflare’s and Google DNS support it. Many ISP resolvers however still lack support for it. A small list of public resolvers supporting DoH can be found at DNS server sources, another list of public resolvers supporting DoT and DoH can be found on DNS Privacy Public Resolvers.

There are two methods to enable DoT or DoH on end-user devices:

  • Add support to applications, bypassing the resolver service from the operating system.
  • Add support to the operating system, transparently providing support to applications.

There are generally three configuration modes for DoT or DoH on the client side:

  • Off: DNS will not be encrypted.
  • Opportunistic mode: try to use a secure transport for DNS, but fallback to unencrypted DNS if the former is unavailable. This mode is vulnerable to downgrade attacks where an attacker can force a device to use unencrypted DNS. It aims to offer privacy when there are no on-path active attackers.
  • Strict mode: try to use DNS over a secure transport. If unavailable, fail hard and show an error to the user.
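The difference between the three modes is just fallback policy, which a few lines make concrete. The lookup callables here are stand-ins for real encrypted and plaintext resolution, not an actual DNS implementation:

```python
def resolve_with_mode(mode, secure_lookup, insecure_lookup):
    """Sketch of the three client configuration modes."""
    if mode == "off":
        return insecure_lookup()
    try:
        return secure_lookup()
    except ConnectionError:
        if mode == "opportunistic":
            # This fallback is the downgrade an active attacker can force.
            return insecure_lookup()
        raise  # strict mode: fail hard rather than fall back

# Stand-in lookups for illustration.
def blocked():
    raise ConnectionError("secure transport filtered")

def encrypted():
    return "answer over DoT/DoH"

def plaintext():
    return "answer over udp/53"
```

Opportunistic mode therefore only protects against passive observers: anyone who can make the secure transport fail gets the client to downgrade itself.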

The current state for system-wide configuration of DNS over a secure transport:

  • Android 9: supports DoT through its “Private DNS” feature. Modes:
    • Opportunistic mode (“Automatic”) is used by default. The resolver from network settings (typically DHCP) will be used.
    • Strict mode can be configured by setting an explicit hostname. No IP address is allowed, the hostname is resolved using the default resolver and is also used for validating the certificate. (Relevant source code)
  • iOS and Android users can also install the app to enable either DoH or DoT support in strict mode. Internally it uses the VPN programming interfaces to enable interception of unencrypted DNS traffic before it is forwarded over a secure channel.
  • Linux with systemd-resolved from systemd 239: DoT through the DNSOverTLS option.
    • Off is the default.
    • Opportunistic mode can be configured, but no certificate validation is performed.
    • Strict mode is available since systemd 243. Any certificate signed by a trusted certificate authority is accepted. However, there is no hostname validation with the GnuTLS backend while the OpenSSL backend expects an IP address.
    • In any case, no Server Name Indication (SNI) is sent. The certificate name is not validated, making a man-in-the-middle attack rather trivial.
  • Linux, macOS, and Windows can use a DoH client in strict mode. The cloudflared proxy-dns command uses the Cloudflare DNS resolver by default, but users can override it through the proxy-dns-upstream option.

Web browsers support DoH instead of DoT:

  • Firefox 62 supports DoH and provides several Trusted Recursive Resolver (TRR) settings. By default DoH is disabled, but Mozilla is running an experiment to enable DoH for some users in the USA. This experiment currently uses Cloudflare's resolver, since we are the only provider that currently satisfies the strict resolver policy required by Mozilla. Since many DNS resolvers still do not support an encrypted DNS transport, Mozilla's approach will ensure that more users are protected using DoH.
    • When enabled through the experiment, or through the “Enable DNS over HTTPS” option at Network Settings, Firefox will use opportunistic mode (network.trr.mode=2 at about:config).
    • Strict mode can be enabled with network.trr.mode=3, but requires an explicit resolver IP to be specified (for example, network.trr.bootstrapAddress=
    • While Firefox ignores the default resolver from the system, it can be configured with alternative resolvers. Additionally, enterprise deployments that use a resolver without DoH support have the option to disable DoH.
  • Chrome 78 enables opportunistic DoH if the system resolver address matches one of the hard-coded DoH providers (source code change). This experiment is enabled for all platforms except Linux and iOS, and excludes enterprise deployments by default.
  • Opera 65 adds an option to enable DoH through Cloudflare's resolver. This feature is off by default. Once enabled, it appears to use opportunistic mode: if (without SNI) is reachable, it will be used. Otherwise it falls back to the default resolver, unencrypted.

The DNS over HTTPS page from the curl project has a comprehensive list of DoH providers and additional implementations.

As an alternative to encrypting the full network path between the device and the external DNS resolver, one can take a middle ground: use unencrypted DNS between devices and the gateway of the local network, but encrypt all DNS traffic between the gateway router and the external DNS resolver. Assuming a secure wired or wireless network, this would protect all devices in the local network against a snooping ISP, or other adversaries on the Internet. As public Wi-Fi hotspots are not considered secure, this approach would not be safe on open Wi-Fi networks. Even if it is password-protected with WPA2-PSK, others will still be able to snoop and modify unencrypted DNS.

Other security considerations

The previous sections described secure DNS transports, DoH and DoT. These will only ensure that your client receives the untampered answer from the DNS resolver. It does not, however, protect the client against the resolver returning the wrong answer (through DNS hijacking or DNS cache poisoning attacks). The “true” answer is determined by the owner of a domain or zone as reported by the authoritative name server. DNSSEC allows clients to verify the integrity of the returned DNS answer and catch any unauthorized tampering along the path between the client and authoritative name server.

However, deployment of DNSSEC is hindered by middleboxes that incorrectly forward DNS messages, and even where the information is available, stub resolvers used by applications might not validate the results. A report from 2016 found that only 26% of users use DNSSEC-validating resolvers.

DoH and DoT protect the transport between the client and the public resolver. The public resolver may have to reach out to additional authoritative name servers in order to resolve a name. Traditionally, the path between any resolver and the authoritative name server uses unencrypted DNS. To protect these DNS messages as well, we did an experiment with Facebook, using DoT between our resolver and Facebook’s authoritative name servers. While setting up a secure channel using TLS increases latency, it can be amortized over many queries.

Transport encryption ensures that resolver results and metadata are protected. For example, the EDNS Client Subnet (ECS) information included with DNS queries could reveal the original client address that started the DNS query. Hiding that information along the path improves privacy. It will also prevent broken middle-boxes from breaking DNSSEC due to issues in forwarding DNS.

Operational issues with DNS encryption

DNS encryption may bring challenges to individuals or organizations that rely on monitoring or modifying DNS traffic. Security appliances that rely on passive monitoring watch all incoming and outgoing network traffic on a machine or on the edge of a network. Based on unencrypted DNS queries, they could, for example, identify machines infected with malware. If the DNS query is encrypted, then passive monitoring solutions will not be able to monitor domain names.

Some parties expect DNS resolvers to apply content filtering for purposes such as:

  • Blocking domains used for malware distribution.
  • Blocking advertisements.
  • Performing parental control filtering, blocking domains associated with adult content.
  • Blocking access to domains serving illegal content according to local regulations.
  • Offering a split-horizon DNS to provide different answers depending on the source network.

An advantage of blocking access to domains via the DNS resolver is that it can be done centrally, without reimplementing it in every single application. Unfortunately, it is also quite coarse. Suppose that a website hosts content for multiple users under different paths on the same domain. The DNS resolver can only see the domain name, and can only choose to block it entirely or not at all. In this case, application-specific controls such as browser extensions would be more effective since they can actually look into the URLs and selectively prevent content from being accessible.
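To make that coarseness concrete, here is a toy comparison, using hypothetical domain names and blocklists, of what a resolver-level filter can decide versus what an application-level filter can decide:

```python
# Hypothetical names and blocklists, for illustration only.
DOMAIN_BLOCKLIST = {"ads.example"}            # what a DNS resolver can act on
URL_BLOCKLIST = {"shared.example/banner/"}    # what an in-app filter can act on

def dns_filter_blocks(qname: str) -> bool:
    """A DNS resolver sees only the queried name, so it must block or
    allow an entire domain."""
    return qname.rstrip(".").lower() in DOMAIN_BLOCKLIST

def url_filter_blocks(url: str) -> bool:
    """An application-level filter sees the full URL and can act per path."""
    host_and_path = url.split("://", 1)[-1]
    return any(host_and_path.startswith(prefix) for prefix in URL_BLOCKLIST)

# The resolver sees the same name for both URLs and cannot tell them apart:
print(dns_filter_blocks("shared.example"))                        # False
# The URL filter can block one path while leaving the other reachable:
print(url_filter_blocks("https://shared.example/banner/ad.js"))   # True
print(url_filter_blocks("https://shared.example/docs/page.html")) # False
```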

DNS monitoring is not comprehensive. Malware could skip DNS and hardcode IP addresses, or use alternative methods to query an IP address. However, not all malware is that complicated, so DNS monitoring can still serve as a defence-in-depth tool.

All of these non-passive monitoring or DNS blocking use cases require support from the DNS resolver. Deployments that rely on opportunistic DoH/DoT upgrades of the current resolver will maintain the same feature set as usually provided over unencrypted DNS. Unfortunately this is vulnerable to downgrades, as mentioned before. To solve this, system administrators can point endpoints to a DoH/DoT resolver in strict mode. Ideally this is done through secure device management solutions (MDM, group policy on Windows, etc.).
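The difference between the two upgrade policies can be sketched as follows; the lookup functions here are stand-ins for real DoH/DoT and plaintext lookups, not any vendor's actual implementation:

```python
# Toy model of opportunistic vs. strict resolver modes.
def resolve(name, secure_lookup, insecure_lookup, strict=False):
    """Opportunistic mode falls back to unencrypted DNS when the secure
    transport fails (and is therefore vulnerable to downgrade attacks);
    strict mode fails closed instead."""
    try:
        return secure_lookup(name)
    except ConnectionError:
        if strict:
            raise  # fail closed: no answer rather than an unprotected one
        return insecure_lookup(name)  # downgrade: works, but unprotected

def blocked(_name):
    # Simulates an on-path attacker blocking the DoH/DoT endpoint.
    raise ConnectionError("secure transport unreachable")

print(resolve("example.com", blocked, lambda n: "192.0.2.1"))  # downgrades
try:
    resolve("example.com", blocked, lambda n: "192.0.2.1", strict=True)
except ConnectionError:
    print("strict mode: resolution fails instead of downgrading")
```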


One of the cornerstones of the Internet is mapping names to addresses using DNS. DNS has traditionally used insecure, unencrypted transports. This has been abused by ISPs in the past for injecting advertisements, but it is also a privacy leak: nosy visitors in the coffee shop can use unencrypted DNS to follow your activity. All of these issues can be solved by using DNS over TLS (DoT) or DNS over HTTPS (DoH). These techniques to protect the user are relatively new and are seeing increasing adoption.

From a technical perspective, DoH is very similar to HTTPS and follows the general industry trend to deprecate non-secure options. DoT is a simpler transport mode than DoH as the HTTP layer is removed, but that also makes it easier to be blocked, either deliberately or by accident.

Secondary to enabling a secure transport is the choice of a DNS resolver. Some vendors will use the locally configured DNS resolver, but try to opportunistically upgrade the unencrypted transport to a more secure transport (either DoT or DoH). Unfortunately, the DNS resolver usually defaults to one provided by the ISP which may not support secure transports.

Mozilla has adopted a different approach. Rather than relying on local resolvers that may not even support DoH, they allow the user to explicitly select a resolver. Resolvers recommended by Mozilla have to satisfy high standards to protect user privacy. To ensure that parental control features based on DNS remain functional, and to support the split-horizon use case, Mozilla has added a mechanism that allows private resolvers to disable DoH.

The DoT and DoH transport protocols are ready for us to move to a more secure Internet. As can be seen in previous packet traces, these protocols are similar to existing mechanisms to secure application traffic. Once this security and privacy hole is closed, there will be many more to tackle.

DNS Encryption Explained



Saturday Morning Breakfast Cereal - Body Language [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

I am willing to do no work while telling developers to make this, in exchange for 10 million dollars in Series A funding.

Today's News:

The new book is OUT!


50 Years of The Internet. Work in Progress to a Better Internet [The Cloudflare Blog]


It was fifty years ago that the very first network packet took flight from the Los Angeles campus at UCLA to the Stanford Research Institute (SRI) building in Menlo Park. Those two California sites kicked off the world of packet networking, of the Arpanet, and of the modern Internet as we use and know it today. Yet by the time the third packet had been transmitted that evening, the receiving computer at SRI had crashed. The “L” and “O” from the word “LOGIN” had been transmitted successfully in their packets; but that “G”, wrapped in its own packet, caused the death of that nascent packet network setup. Even today, software crashes; that’s a solid fact. But this crash was, quite literally, historic.

Courtesy of MIT Advanced Network Architecture Group 

So much has happened since that day (October 29th, to be exact) in 1969; in fact, it’s an understatement to say “so much has happened”! No single blog article could ever capture the full history of packets from then to now. Here at Cloudflare we say we are helping build a “better Internet”, so it makes perfect sense for us to honor the history of the Arpanet and its successor, the Internet, by focusing on some of the other folks who have helped build a better Internet.

Leonard Kleinrock, Steve Crocker, and crew - those first packets

Nothing takes away from what happened that October day. The move from a circuit-based networking mindset to a packet-based network was momentous. The phrase net-heads vs bell-heads was born that day, and it’s still alive today! The foundation for the Internet as a permissionless innovation was laid the moment that first packet traversed the network fifty years ago.

Courtesy of UCLA

Professor Leonard (Len) Kleinrock continued to work on the very-basics of packet networking. The network used on that day expanded from two nodes to four nodes (in 1969, one IMP was delivered each month from BBN to various university sites) and created a network that spanned the USA from coast to coast and then beyond.

ARPANET logical map 1973 via Wikipedia 

In the 1973 map there’s a series of boxes marked TIP. These were a version of the IMP used to connect computer terminals, along with computers (hosts), to the ARPANET. Every IMP and TIP was managed by Bolt, Beranek and Newman (BBN), based in Cambridge, Mass. This is vastly different from today’s Internet, where every network is operated autonomously.

By 1977 the ARPANET had grown further with links from the United States mainland to Hawaii plus links to Norway and the United Kingdom.

ARPANET logical map 1977 via Wikipedia

Focusing back to that day in 1969, Steve Crocker (who was a graduate student at UCLA at that time) headed up the development of the NCP software. The Network Control Program (later remembered as Network Control Protocol) provided the host to host transmission control software stack. Early versions of telnet and FTP ran atop NCP.

During this journey Len Kleinrock, Steve Crocker, and the other early packet pioneers have always been solid members of the Internet community and continue to contribute daily to a better Internet.

Steve Crocker and Bill Duvall have written a guest blog about that day fifty years ago. Please read it after you've finished reading this blog.

BTW: Today, on this 50th anniversary, UCLA is celebrating that history via a symposium.

Their collective accomplishments are extensive and still relevant today.

Vint Cerf and Bob Kahn - the creation of TCP/IP

In 1973 Vint Cerf was asked to work on a protocol to replace the original NCP protocol. The new protocol is now known as TCP/IP. Of course, everyone had to move from NCP to TCP and that was outlined in RFC801. At the time (1982 and 1983) there were around 200 to 250 hosts on the ARPANET, yet that transition was still a major undertaking.

Finally, on January 1st, 1983, a little over thirteen years after that first packet flowed, the NCP protocol was retired and TCP/IP was enabled. The ARPANET got what would become the Internet’s first large-scale addressing scheme (IPv4). This was better in so many ways; but in reality, this transition was just one more stepping stone towards our modern and better Internet.

Jon Postel - The RFCs, The numbers, The legacy

Some people write code, some people write documents, some people organize documents, some people organize numbers. Jon Postel did all of these things. Jon was the first person to be in charge of allocating numbers (you know, IP addresses) back in the early 80’s. In a way it was a thankless job that no one else wanted to do. Jon was also the keeper of the early documents (Request For Comments, or RFCs) that describe how the packet network should operate. Everything was available so that anyone could write code and join the network. Everyone was also able to write a fresh document (or update an existing document) so that the ecosystem of the Arpanet could grow. Some of those documents are still in existence and referenced today. RFC791 defines the IP protocol and is dated 1981; it’s still an active document in use today! Those early days and Jon’s massive contributions have been well documented and acknowledged. A better Internet is impossible without these conceptual building blocks.

Jon passed away in 1998; however, his legacy and his thoughts are still in active use today. Of TCP he once said: “Be conservative in what you send, be liberal in what you accept”. This is called the robustness principle, and it’s still key to writing good network protocol software.

Bill Joy & crew - Berkeley BSD Unix 4.2 and its TCP/IP software

What’s the use of a protocol if you don’t have software to speak it? In the early 80’s there were many efforts to build both affordable and fast hardware, along with the software to speak to that hardware. At the University of California, Berkeley (UCB) there was a group of software developers tasked in 1980 by the Defense Advanced Research Projects Agency (DARPA) to implement the brand-new TCP/IP protocol stack on the VAX under Unix. They not only solved that task; they went a long way beyond it.

The folks at UCB (Bill Joy, Marshall Kirk McKusick, Keith Bostic, Michael Karels, and others) created an operating system called 4.2BSD (Berkeley Software Distribution) that came with TCP/IP ingrained in its core. It was based on AT&T’s Unix v6 and Unix/32V; however, it had deviated significantly in many ways. The networking code, or sockets as its interface is called, became the underlying building block of each and every piece of networking software in the modern world of the Internet. We at Cloudflare have written numerous times about networking kernel code, and it all boils down to the code that was written back at UCB. Bill Joy went on to be a founder of Sun Microsystems (which commercialized 4.2BSD and much more). Others from UCB went on to help build other companies that are still relevant to the Internet today.

Fun fact: Berkeley’s Unix (or FreeBSD, OpenBSD, and NetBSD, as its variants are known) is now the basis of the software on every iPhone, iPad, and Mac laptop in existence. Android and Chromebooks come from a different lineage, but they still hold those BSD methodologies as the fundamental basis of all their networking software.

Al Gore - The Information Superhighway - or retold as “funding the Internet”

Do you believe that Al Gore invented the Internet? It actually doesn’t matter which side of that statement you want to argue; the simple fact is that the US Government funded the National Science Foundation (NSF) with the task of building an “information superhighway”. Al Gore himself said: “how do we create a nationwide network of information superhighways? Obviously, the private sector is going to do it, but the Federal government can catalyze and accelerate the process.” He said that on September 19, 1994, and this blog post’s author knows that for a fact, because I was there in the room when he said it!

The United States Federal Government helped fund the growth of the Arpanet into the early version of the Internet. Without the government's efforts, we might not be where we are today. Luckily, just a handful of years later, the NSF decided that in fact the commercial world could and should provide the main building blocks for the Internet, and instantly the Internet as we know it today was born. Packets that fly across commercial backbones are paid for via commercial contracts. The parts that are still funded by the government (any government) are normally only the parts used by universities or military users.

But this author is still going to thank Al Gore for helping create a better Internet back in the early 90’s.

Sir Tim Berners-Lee - The World Wide Web

What can I say? In 1989 Tim Berners-Lee (who was later knighted and is now Sir Tim) invented the World Wide Web and we would not have billions of people using the Internet today without him. Period!


Yeah, let's clear up that subtle point. Sir Tim invented the World Wide Web (WWW) and Vint Cerf invented the Internet. When folks talk about using one or the other, it’s worth reminding them that there is a difference. But I digress!

Sir Tim’s creation is what provides everyday folks with a window into information on the Internet. Before the WWW we had textual interfaces to information, but only if you knew where to look and what to type. Every time we click on a link or press submit to buy something, we should remember that the web is only usable in such a mass and uniform form because of Sir Tim’s creation.

Sally Floyd - The subtle art of dropping packets

Random Early Detection (RED) is an algorithm that saved the Internet back in the early 90’s. Built on earlier work by Van Jacobson, it defined a method for a router to drop packets when it was overloaded, or, more importantly, about to be overloaded. Before Van Jacobson’s and Sally Floyd’s work, packet networks would congest heavily and slow down. It seemed natural never to throw away data; but with the two inventors of RED, that all changed. Her follow-up work is described in an August 1993 paper.


Networks have become much more complex since August 1993, yet the RED code still exists and is used in nearly every Unix or Linux kernel today. See the tc-red(8) command and/or the Linux kernel code itself.
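The core of the idea fits in a few lines. The sketch below uses illustrative parameter values, not those of any real deployment: an exponentially weighted average of the queue length drives a drop probability that rises linearly between a minimum and a maximum threshold, and everything beyond the maximum threshold is dropped.

```python
# Minimal sketch of the RED drop decision; real implementations
# (e.g. Linux tc-red) add counting and scaling refinements.
MIN_TH, MAX_TH, MAX_P, WEIGHT = 5.0, 15.0, 0.1, 0.002  # illustrative values

def red_drop_probability(avg_queue: float) -> float:
    """Below MIN_TH never drop; above MAX_TH always drop; in between,
    drop with probability rising linearly toward MAX_P."""
    if avg_queue < MIN_TH:
        return 0.0
    if avg_queue >= MAX_TH:
        return 1.0
    return MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)

def update_avg(avg: float, instantaneous: int, weight: float = WEIGHT) -> float:
    """Exponentially weighted moving average of the queue length, so
    short bursts do not trigger drops but sustained congestion does."""
    return (1 - weight) * avg + weight * instantaneous

print(red_drop_probability(3.0))   # queue healthy: never drop
print(red_drop_probability(10.0))  # early, probabilistic dropping
print(red_drop_probability(20.0))  # overloaded: drop everything
```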

It was with great sorrow that we learned Sally Floyd passed away this past August. But, rest assured, her algorithm will keep a better Internet flowing smoothly for a long time to come.

Jay Adelson and Al Avram - The datacenters that interconnect networks

Remember Al Gore's comment above that the private sector would build the Internet? Back in the late 90’s that’s exactly what happened. Telecom companies were selling capacity to fledgling ISPs. Nationwide IP backbones were being built by the likes of PSI, Netcom, UUnet, Digex, CAIS, ANS, etc. The telcos themselves, like MCI and Sprint (but interestingly not AT&T at the time), were getting into providing Internet access in a big way.

In the US everything was moving very fast. By the mid-90’s there was no way for your shiny new ISP to get a connection from a regional research network anymore. Everything had gone commercial, and the NSF-funded parts of the Internet were not available for commercial packets.

The NSF, in its goal to allow commercial networks to build the Internet, had also specified that those networks should interconnect at four locations around the country: New Jersey; Chicago; the Bay Area in California; and the Washington, DC area.

Network Access Point via Wikipedia

The NAPs, as they were called, were to provide interconnection between commercial networks, and to give the research networks a way to interconnect with the commercial networks and with each other. The NAPs suddenly exploded in usage and near-instantly needed to be bigger; the buildings they were housed in ran out of space, or power, or both! Those networks needed homes, interconnections needed a better structure, and the old buildings that were housing the Internet’s routers just didn’t cut it anymore.

Jay and Al had a vision: new, massive datacenters that could securely house the growing needs of the power-hungry Internet. But that was only a small portion of the vision. They realized that if many networks all lived under the same roof, then interconnecting them could indeed build a better Internet. They installed Internet Exchanges and a standardized way of cross-connecting from one network to another. They were carrier neutral, so that everyone was treated equally. It was what became known as the “network effect”, and it was a success. The more networks you had under one roof, the more that other networks would want to be housed under those same roofs. The company they created was (and still is) called Equinix. It wasn’t the first company to realize this; but it sure has become one of the biggest and most successful in this arena.

Today, a vast amount of the Internet uses Equinix datacenters and its IXs, along with similar offerings from similar companies. Jay and Al’s vision absolutely paved the way to a better Internet.

Everyone who’s a member of The Internet Society 1992-Today

It turns out that people realized that the modern Internet is not all-commercial all-the-time. There is a need for other influences. Civil society, governments, and academics, along with those commercial entities, should also have a say in how the Internet evolves. This brings into the conversation a myriad of people who have either been members of The Internet Society (ISOC) and/or have worked directly for ISOC over its 27+ years. This is the organization that manages and helps fund the IETF (where protocols are discussed and standardized). ISOC plays a decisive role at The Internet Governance Forum (IGF), and fosters a clear understanding of how the Internet should be used and protected among both the general public and regulators worldwide. ISOC’s involvement with Internet Exchange development (vital as the Internet grows and connects users and content) has been a game changer for many, many countries, especially in Africa.

ISOC has an interesting funding mechanism centered around the dotORG domain. You may not have realized that you were helping the Internet grow when you registered and paid for your .org domain; however, you were!

Over the life of ISOC, the Internet has moved from being the domain of engineers and scientists into something used by nearly everyone, independent of technical skill or, in fact, a full understanding of its inner workings. ISOC’s mission is "to promote the open development, evolution and use of the Internet for the benefit of all people throughout the world". It has been a solid part of that growth.

Giving voice to everyone on how the Internet could grow and how it should (or should not be) regulated, is front-and-center for every person involved with ISOC globally. Defining both an inclusive Internet and a better Internet is the everyday job for those people.

Kanchana Kanchanasut - Thailand and .TH

In 1988, amongst other things, Professor Kanchana Kanchanasut registered and operated the country top-level domain .TH (from the two-letter ISO 3166 code for Thailand), making Thailand one of the first countries with its own TLD; something all countries take for granted today.

Also in 1988, five Thai universities got dial-up connections to the Internet because of her work. However, the real breakthrough came when Prof. Kanchanasut’s efforts led to the first leased line interconnecting Thailand to the nascent Internet of the early 90’s. That was 1991, and since then Thailand’s connectivity has exploded. It’s an amazingly well-connected country. Today it boasts a plethora of mobile operators and international undersea and cross-border cables, along with Prof. Kanchanasut’s present-day work spearheading an independent and growing Internet Exchange within Thailand.

In 2013, the "Mother of the Internet in Thailand" as she is affectionately called, was inducted into the Internet Hall of Fame by the Internet Society. If you’re in Thailand, or South East Asia, then she’s the reason why you have a better Internet.

The list continues

In the fifty years since that first packet there have been heroes, both silent and profoundly vocal, who have moved the Internet forward. There’s no way all of them could be named or called out here; however, you will find many of them if you go looking. Wander through the thousands of RFCs, or check out the Internet Hall of Fame. The Internet today is a better Internet because anyone can be a contributor.

Cloudflare and the better Internet

Cloudflare, or in fact any part of the Internet, would not be where it is today without the groundbreaking work of these people, plus many others unnamed here. This fifty-year effort has moved the needle in such a way that without all of them, the runaway success of the Internet would not have been possible!

Cloudflare is just over nine years old (that’s only 18% of this fifty-year period). Gazillions and gazillions of packets have flowed since Cloudflare started providing its services, and we sincerely believe we have done our part with those services to build a better Internet. Oh, and we haven’t finished our work, far from it! We still have a long way to go in helping build a better Internet. And we’re just getting started!

If you’re interested in helping build a better Internet and want to join Cloudflare in our offices in San Francisco, Singapore, London, Austin, Sydney, Champaign, Munich, San Jose, New York, or our new Lisbon, Portugal offices, then buzz over to our jobs page and come join us! #betterInternet


Fifty Years Ago [The Cloudflare Blog]


This is a guest post by Steve Crocker of Shinkuro, Inc. and Bill Duvall of Consulair. Fifty years ago they were both present when the first packets flowed on the Arpanet.

On 29 October 2019, Professor Leonard (“Len”) Kleinrock is chairing a celebration at the University of California, Los Angeles (UCLA).  The date is the fiftieth anniversary of the first full system test and remote host-to-host login over the Arpanet.  Following a brief crash caused by a configuration problem, a user at UCLA was able to log in to the SRI SDS 940 time-sharing system.  But let us paint the rest of the picture.

The Arpanet was a bold project to connect sites within the ARPA-funded computer science research community and to use packet-switching as the technology for doing so.  Although there were parallel packet-switching research efforts around the globe, none were at the scale of the Arpanet project. Cooperation among researchers in different laboratories, applying multiple machines to a single problem and sharing of resources were all part of the vision.  And over the fifty years since then, the vision has been fulfilled, albeit with some undesired outcomes mixed in with the enormous benefits.  However, in this blog, we focus on just those early days.

In September 1969, Bolt, Beranek and Newman (BBN) in Cambridge, MA delivered the first Arpanet IMP (packet switch) to Len Kleinrock’s laboratory at UCLA. The Arpanet incorporated his theoretical work on packet switching and UCLA was chosen as the network measurement site for validation of his theories.  The second IMP was installed a month later at Doug Engelbart’s laboratory at the Stanford Research Institute – now called SRI International – in Menlo Park, California.  Engelbart had invented the mouse and his lab had developed a graphical interface for structured and hyperlinked text.  Engelbart’s vision saw computer users sharing information over a wide-scale network, so the Arpanet was a natural candidate for his work. Today, we have seen that vision travel from SRI to Xerox to Apple to Microsoft, and it is now a part of everyone’s environment.

“IMP” stood for Interface Message Processor; we would now simply say “router.” Each IMP was connected to up to four host computers.  At UCLA the first host was a Scientific Data Systems (SDS) Sigma 7.  At SRI, the host was an SDS 940.  Jon Postel, Vint Cerf and Steve Crocker were among the graduate students at UCLA involved in the design of the protocols between the hosts on the Arpanet, as were Bill Duvall, Jeff Rulifson, and others at SRI (see RFC 1 and RFC 2.)

SRI and UCLA quickly connected their hosts to the IMPs.  Duvall at SRI modified the SDS 940 time-sharing system to allow host to host terminal connections over the net. Charley Kline wrote the complementary client program at UCLA.  These efforts required building custom hardware for connecting the IMPs to the hosts, and programming for both the IMPs and the respective hosts.  At the time, systems programming was done either in assembly language or special purpose hybrid languages blending simple higher-level language features with assembler.  Notable examples were ESPOL for the Burroughs 5500 and PL/I for Multics.  Much of Engelbart’s NLS system was written in such a language, but the time-sharing system was written in assembler for efficiency and size considerations.

Along with the delivery of the IMPs, a deadline of October 31 was set for connecting the first hosts.  Testing was scheduled to begin on October 29 in order to allow a few days for necessary debugging and handling of unanticipated problems.   In addition to the high-speed line that connected the SRI and UCLA IMPs, there was a parallel open, dedicated voice line. On the evening of October 29 Duvall at SRI donned his headset as did Charley Kline at UCLA, and both host-IMP pairs were started. Charley typed an L, the first letter of a LOGIN command.  Duvall, tracking the activity at SRI, saw that the L was received, and that it launched a user login process within the 940. The 940 system was full duplex, so it echoed an “L” across the net to UCLA.  At UCLA, the L appeared on the terminal.  Success! Charley next typed O and received back O.  Charley typed G, and there was silence.  At SRI, Duvall quickly determined that an echo buffer had been sized too small[1], re-sized it, and restarted the system. Charley  typed “LO” again, and received back the normal “LOGIN”.  He typed a confirming RETURN, and the first host-to-host login on the Arpanet was completed.

Len Kleinrock noted that the first characters sent over the net were “LO.”  Sensing the importance of the event, he expanded “LO" to “Lo and Behold”, which later became part of the title of Werner Herzog’s documentary “Lo and Behold: Reveries of the Connected World.”

Engelbart's five finger keyboard and mouse with three buttons. The mouse evolved and became ubiquitous. The five finger keyboard faded.

IMPs continued to be installed on the Arpanet at the rate of roughly one per month over the next two years.  Soon we had a spectacularly large network with more than twenty hosts, and the connections between the IMPs were permanent telephone lines operating at the lightning speed of 50,000 bits per second[2].

Len Kleinrock and IMP #1 at UCLA

Today, all computers come with hardware and software to communicate with other computers.  Not so back then.  Each computer was the center of its own world, and expected to be connected only to subordinate “peripheral” devices – printers, tape drives, etc.  Many even used different character sets.  There was no standard method for connecting two computers together, not even ones from the same manufacturer. Part of what made the Arpanet project bold was the diversity of the hardware and software at the research centers.  Almost all of the hosts at these sites were time-shared computers.  Typically, several people shared the same computer, and the computer processed each user’s computation a little bit at a time.  These computers were large and expensive.  Personal computers were fifteen years in the future, and smart phones were science fiction.  Even Dick Tracy’s fantasy two-way wrist radio envisioned only voice interaction, not instant access to databases and sharing of pictures and videos.

Dick Tracy and his two-way radio.

Each site had to create a hardware connection from the host(s) to the IMP. Further, each site had to add drivers or more to the operating system in its host(s) so that programs on the host could communicate with the IMP.  The protocols for host to host communication were in their infancy and unproven.

During those first two years when IMPs were being installed monthly, we met with students and researchers at the other sites to develop the first suite of protocols.  The bottom layer was forgettably named the Host-Host protocol[3].  Telnet, for emulating terminal dial-up, and the File Transfer Protocol (FTP) were on the next layer above the Host-Host protocol.  Email started as a special case of FTP and later evolved into its own protocol.  Other networks sprang up and the Arpanet became the seedling for the Internet, with TCP providing a reliable, two-way host to host connection, and IP below it stitching together the multiple networks of the Internet.  But the Telnet and FTP protocols continued for many years and are only recently being phased out in favor of more robust and more secure alternatives.

The hardware interfaces, the protocols and the software that implemented the protocols were the tangible engineering products of that early work.  Equally important was the social fabric and culture that we created.  We knew the system would evolve, so we envisioned an open and evolving architecture.  Many more protocols would be created, and the process is now embodied in the Internet Engineering Task Force (IETF).  There was also a strong spirit of cooperation and openness.  The Request for Comments (RFCs) series of notes were open for anyone to write and everyone to read.  Anyone was welcome to participate in the design of the protocol, and hence we now have important protocols that have originated from all corners of the world.

In October 1971, two years after the first IMP was installed, we held a meeting at MIT to test the software on all of the hosts.  Researchers at each host attempted to login, via Telnet, to each of the other hosts.  In the spirit of Samuel Johnson’s famous quote[4], the deadline and visibility within the research community stimulated frenetic activity all across the network to get everything working.  Almost all of the hosts were able to login to all of the other hosts.  The Arpanet was finally up and running.  And the bakeoff at MIT that October set the tone for the future: test your software by connecting to others.  No need for formal standards certification or special compliance organizations; the pressure to demonstrate your stuff actually works with others gets the job done.

[1] The SDS 940 had a maximum memory size of 65K 24-bit words. The time-sharing system along with all of its associated drivers and active data had to share this limited memory, so space was precious and all data structures and buffers were kept to the minimum possible size. The original host-to-host protocol called for terminal emulation and single character messages, and buffers were sized accordingly. What had not been anticipated was that in a full duplex system such as the 940, multiple characters might be echoed for a single received character. Such was the case when the G of LOG was echoed back as “GIN” due to the command completion feature of the SDS 940 operating system.

[2] “50,000” is not a misprint. The telephone lines in those days were analog, not digital. To achieve a data rate of 50,000 bits per second, AT&T used twelve voice grade lines bonded together and a Western Electric series 303A modem that spread the data across the twelve lines. Several years later, an ordinary “voice grade” line was implemented with digital technology and could transmit data at 56,000 bits per second, but in the early days of the Arpanet 50Kbs was considered very fast. These lines were also quite expensive.

[3] In the papers that described the Host-Host protocol, the term Network Control Program (NCP) designated the software addition to the operating system that implemented the Host-Host protocol. Over time, the term Host-Host protocol fell into disuse in favor of Network Control Protocol, and the initials “NCP” were repurposed.

[4] Samuel Johnson - ‘Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.’

Monday, 28 October


Supporting the latest version of the Privacy Pass Protocol [The Cloudflare Blog]


At Cloudflare, we are committed to supporting and developing new privacy-preserving technologies that benefit all Internet users. In November 2017, we announced server-side support for the Privacy Pass protocol, a piece of work developed in collaboration with the academic community. Privacy Pass, in a nutshell, allows clients to provide proof of trust without revealing where and when the trust was provided. The aim of the protocol is to allow anyone to prove they are trusted by a server, without that server being able to track them via the trust that was assigned.

On a technical level, Privacy Pass clients receive attestation tokens from a server; these tokens can then be redeemed in the future. The tokens are provided when a server deems the client to be trusted; for example, after they have logged into a service or if they prove certain characteristics. The redeemed tokens are cryptographically unlinkable to the attestation originally provided by the server, and so they do not reveal anything about the client.


To use Privacy Pass, clients can install an open-source browser extension available in Chrome & Firefox. There have been over 150,000 individual downloads of Privacy Pass worldwide; approximately 130,000 in Chrome and more than 20,000 in Firefox. The extension is supported by Cloudflare to make websites more accessible for users. This complements previous work, including the launch of Cloudflare onion services to help improve accessibility for users of the Tor Browser.

The initial release was almost two years ago, and it was followed up with a research publication that was presented at the Privacy Enhancing Technologies Symposium 2018 (winning a Best Student Paper award). Since then, Cloudflare has been working with the wider community to build on the initial design and improve Privacy Pass. We’ll be talking about the work that we have done to develop the existing implementations, alongside the protocol itself.

What’s new?

Support for Privacy Pass v2.0 browser extension:

  • Easier configuration of workflow.
  • Integration with new service provider (hCaptcha).
  • Compliance with hash-to-curve draft.
  • Possible to rotate keys outside of extension release.
  • Available in Chrome and Firefox (works best with up-to-date browser versions).

Rolling out a new server backend using the Cloudflare Workers platform:

  • Cryptographic operations performed using internal V8 engine.
  • Provides public redemption API for Cloudflare Privacy Pass v2.0 tokens.
  • Available by making POST requests; see the documentation for example usage.
  • Only compatible with extension v2.0 (check that you have updated!).


Ongoing standardization work:

  • Continued development of oblivious pseudorandom functions (OPRFs) draft in prime-order groups with CFRG@IRTF.
  • New draft specifying Privacy Pass protocol.

Extension v2.0

In the time since the release, we’ve been working on a number of new features. Today we’re excited to announce support for version 2.0 of the extension, the first update since the original release. The extension continues to be available for Chrome and Firefox. You may need to download v2.0 manually from the store if you have auto-updates disabled in your browser.

The extension remains under active development and we still regard our support as in the beta phase. This will continue to be the case as the draft specification of the protocol continues to be written in collaboration with the wider community.


New Integrations

The client implementation uses the WebRequest API to look for certain types of HTTP requests. When these requests are spotted, they are rewritten to include some cryptographic data required for the Privacy Pass protocol. This allows Privacy Pass providers receiving this data to authorize access for the user.

For example, a user may receive Privacy Pass tokens for completing some server security checks. These tokens are stored by the browser extension, and any future request that needs similar security clearance can be modified to add a stored token as an extra HTTP header. The server can then check the client token and verify that the client has the correct authorization to proceed.


While Cloudflare supports a particular type of request flow, it would be impossible to expect different service providers to all abide by the same exact interaction characteristics. One of the major changes in the v2.0 extension has been a technical rewrite to instead use a central configuration file. The config is specified in the source code of the extension and allows easier modification of the browsing characteristics that initiate Privacy Pass actions. This makes adding new, completely different request flows possible by simply cloning and adapting the configuration for new providers.
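A rough sketch of what such a configuration might look like (all field names and patterns below are invented for illustration; see the extension source for the real schema):

```javascript
// Hypothetical provider configurations: each entry describes which requests
// trigger issuance and where tokens may be redeemed. Field names are
// invented for illustration and do not match the extension's real schema.
const providers = [
  {
    name: 'cloudflare',
    issueUrl: /\/cdn-cgi\/challenge/, // requests that trigger token issuance
    redeemHosts: /.*/,                // hosts where tokens may be spent
    commitmentsKey: 'CF',             // which key commitments verify proofs
  },
  {
    name: 'hcaptcha',
    issueUrl: /hcaptcha\.example\/checkcaptcha/,
    redeemHosts: /hcaptcha\.example$/,
    commitmentsKey: 'HC',
  },
];

// Dispatch: find the provider whose issuance pattern matches a request URL.
function providerFor(url) {
  return providers.find((p) => p.issueUrl.test(url)) || null;
}
```

With this shape, onboarding a new provider is a matter of appending one object rather than rewriting the request-handling logic.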

To demonstrate that such integrations are now possible with other services beyond Cloudflare, a new version of the extension will soon be rolling out that is supported by the CAPTCHA provider hCaptcha. Users that solve ephemeral challenges provided by hCaptcha will receive privacy-preserving tokens that will be redeemable at other hCaptcha customer sites.

“hCaptcha is focused on user privacy, and supporting Privacy Pass is a natural extension of our work in this area. We look forward to working with Cloudflare and others to make this a common and widely adopted standard, and are currently exploring other applications. Implementing Privacy Pass into our globally distributed service was relatively straightforward, and we have enjoyed working with the Cloudflare team to improve the open source Chrome browser extension in order to deliver the best experience for our users.” - Eli-Shaoul Khedouri, founder of hCaptcha

This hCaptcha integration with the Privacy Pass browser extension acts as a proof-of-concept in establishing support for new services. Any new providers that would like to integrate with the Privacy Pass browser extension can do so simply by making a PR to the open-source repository.

Improved cryptographic functionality

After the release of v1.0 of the extension, there were features that were still unimplemented. These included proper zero-knowledge proof validation for checking that the server was always using the same committed key. In v2.0 this functionality has been completed, verifiably preventing a malicious server from attempting to deanonymize users by using a different key for each request.

The cryptographic operations required for Privacy Pass are performed using elliptic curve cryptography (ECC). The extension currently uses the NIST P-256 curve, for which we have included some optimisations. Firstly, elliptic curve points can now be stored in both compressed and uncompressed data formats. This reduces browser storage by 50%, and allows server responses to be made smaller too.
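The 50% figure follows directly from the SEC 1 point-encoding sizes for P-256 (simple arithmetic, not the extension's code):

```javascript
// SEC 1 encodings for a P-256 point: each coordinate is 32 bytes.
const COORD_BYTES = 32;
const uncompressed = 1 + 2 * COORD_BYTES; // 0x04 || X || Y  -> 65 bytes
const compressed = 1 + COORD_BYTES;       // 0x02/0x03 || X  -> 33 bytes
const saving = 1 - compressed / uncompressed; // just under 50% smaller
```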


Secondly, support has been added for hashing to the P-256 curve using the “Simplified Shallue-van de Woestijne-Ulas” (SSWU) method specified in an ongoing draft for standardizing encodings for hashing to elliptic curves. The implementation is compliant with the specification of the “P256-SHA256-SSWU-” ciphersuite in this draft.

These changes have a dual advantage. Firstly, they ensure that the P-256 hash-to-curve implementation is compliant with the draft specification. Secondly, this ciphersuite removes the need for probabilistic methods, such as hash-and-increment. The hash-and-increment method has a non-negligible chance of failure, and its running time is highly dependent on the hidden client input. While it is not clear how to abuse timing attack vectors currently, using the SSWU method should reduce the potential for attacking the implementation and learning the client input.

Key rotation

As we mentioned above, verifying that the server is always using the same key is an important part of ensuring the client’s privacy. This ensures that the server cannot segregate the user base and reduce client privacy by using different secret keys for each client that it interacts with. The server guarantees that it’s always using the same key by publishing a commitment to its public key somewhere that the client can access.

Every time the server issues Privacy Pass tokens to the client, it also produces a zero-knowledge proof that it has produced these tokens using the correct key.


Before the extension stores any tokens, it first verifies the proof against the commitments it knows. Previously, these commitments were stored directly in the source code of the extension. This meant that if the server wanted to rotate its key, then it required releasing a new version of the extension, which was unnecessarily difficult. The extension has been modified so that the commitments are stored in a trusted location that the client can access when it needs to verify the server response. Currently this location is a separate Privacy Pass GitHub repository. For those that are interested, we have provided a more detailed description of the new commitment format in Appendix A at the end of this post.


Implementing server-side support in Workers

So far we have focused on client-side updates. As part of supporting v2.0 of the extension, we are rolling out some major changes to the server-side support that Cloudflare uses. For version 1.0, we used a Go implementation of the server. In v2.0 we are introducing a new server implementation that runs in the Cloudflare Workers platform. This server implementation is only compatible with v2.0 releases of Privacy Pass, so you may need to update your extension if you have auto-updates turned off in your browser.

Our server will run on its own public domain, and all Privacy Pass requests sent to the Cloudflare edge are handled by Worker scripts that run on this domain. Our implementation has been rewritten in JavaScript, with cryptographic operations running in the V8 engine that powers Cloudflare Workers. This means that we are able to run highly efficient and constant-time cryptographic operations. On top of this, we benefit from the enhanced performance provided by running our code in the Workers platform, as close to the user as possible.

WebCrypto support

Firstly, you may be asking, how do we manage to implement cryptographic operations in Cloudflare Workers? Currently, support for performing cryptographic operations is provided in the Workers platform via the WebCrypto API. This API allows users to compute functionality such as cryptographic hashing, alongside more complicated operations like ECDSA signatures.

In the Privacy Pass protocol, as we’ll discuss a bit later, the main cryptographic operations are performed by a protocol known as a verifiable oblivious pseudorandom function (VOPRF). Such a protocol allows a client to learn function outputs computed by a server, without revealing to the server what their actual input was. The verifiable aspect means that the server must also prove (in a publicly verifiable way) that the evaluation they pass to the user is correct. Such a function is pseudorandom because the server output is indistinguishable from a random sequence of bytes.


The VOPRF functionality requires a server to perform low-level ECC operations that are not currently exposed in the WebCrypto API. We weighed the possible ways of working around this requirement. First we trialled using the WebCrypto API in a non-standard manner, using EC Diffie-Hellman key exchange as a method for performing the scalar multiplication that we needed. We also tried to implement all operations in pure JavaScript. Unfortunately, both methods were unsatisfactory: they would either mean integrating large external cryptographic libraries, or they would be far too slow to be used in a performant Internet setting.

In the end, we settled on a solution that adds functions necessary for Privacy Pass to the internal WebCrypto interface in the Cloudflare V8 JavaScript engine. This algorithm mimics the sign/verify interface provided by signature algorithms like ECDSA. In short, we use the sign() function to issue Privacy Pass tokens to the client, while verify() is used by the server to verify data that is redeemed by the client. These functions are implemented directly in the V8 layer, and so they are much more performant and secure (running in constant time, for example) than pure JS alternatives.

The Privacy Pass WebCrypto interface is not currently available for public usage. If it turns out there is enough interest in using this additional algorithm in the Workers platform, then we will consider making it public.


In recent times, VOPRFs have been shown to be a highly useful primitive in establishing many cryptographic tools. Aside from Privacy Pass, they are also essential for constructing password-authenticated key exchange protocols such as OPAQUE. They have also been used in designs of private set intersection, password-protected secret-sharing protocols, and privacy-preserving access-control for private data storage.

Public redemption API

Writing the server in Cloudflare Workers means that we will be providing server-side support for Privacy Pass on a public domain! While we only issue tokens to clients after we are sure that we can trust them, anyone will be able to redeem the tokens using our public redemption API. As we roll out the server-side component worldwide, you will be able to interact with this API and verify Cloudflare Privacy Pass tokens independently of the browser extension.


This means that any service can accept Privacy Pass tokens from a client that were issued by Cloudflare, and then verify them with the Cloudflare redemption API. Using the result provided by the API, external services can check whether Cloudflare has authorized the user in the past.

We think that this will benefit other service providers because they can use the attestation of authorization from Cloudflare in their own decision-making processes, without sacrificing the privacy of the client at any stage. We hope that this ecosystem can grow further, with potentially more services providing public redemption APIs of their own. With a more diverse set of issuers, these attestations will become more useful.

By running our server on a public domain, we are effectively a customer of the Cloudflare Workers product. This means that we are also able to make use of Workers KV for protecting against malicious clients. In particular, servers must check that clients are not re-using tokens during the redemption phase. The performance of Workers KV in analyzing reads makes this an obvious choice for providing double-spend protection globally.
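A minimal sketch of the double-spend check, with an in-memory Map standing in for Workers KV (the function name and error shape here are illustrative, not the API's exact output):

```javascript
// In production the store is Workers KV: a globally readable map keyed on
// (a digest of) the redeemed token, so a token is only ever honored once.
const spentTokens = new Map();

function redeemOnce(tokenDigest) {
  if (spentTokens.has(tokenDigest)) {
    // Second presentation of the same token: reject as a double spend.
    return { result: 'error', message: 'token already spent' };
  }
  spentTokens.set(tokenDigest, Date.now()); // mark spent before honoring
  return { result: 'success' };
}
```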

If you would like to use the public redemption API, we provide documentation for using it. We also provide some example requests and responses in Appendix B at the end of the post.

Standardization & new applications

In tandem with the recent engineering work that we have been doing on supporting Privacy Pass, we have been collaborating with the wider community in an attempt to standardize both the underlying VOPRF functionality, and the protocol itself. While the process of standardization for oblivious pseudorandom functions (OPRFs) has been running for over a year, the recent efforts to standardize the Privacy Pass protocol have been driven by very recent applications that have come about in the last few months.

Standardizing protocols and functionality is an important way of providing interoperable, secure, and performant interfaces for running protocols on the Internet. This makes it easier for developers to write their own implementations of this complex functionality. The process also provides helpful peer reviews from experts in the community, which can lead to better surfacing of potential security risks that should be mitigated in any implementation. Other benefits include coming to a consensus on the most reliable, scalable and performant protocol designs for all possible applications.

Oblivious pseudorandom functions

Oblivious pseudorandom functions (OPRFs) are a generalization of VOPRFs that do not require the server to prove that they have evaluated the functionality properly. Since July 2019, we have been collaborating on a draft with the Crypto Forum Research Group (CFRG) at the Internet Research Task Force (IRTF) to standardize an OPRF protocol that operates in prime-order groups. This is a generalisation of the setting that is provided by elliptic curves. This is the same VOPRF construction that was originally specified by the Privacy Pass protocol and is based heavily on the original protocol design from the paper of Jarecki, Kiayias and Krawczyk.

One of the recent changes that we've made in the draft is to increase the size of the key that we consider for performing OPRF operations on the server-side. Existing research suggests that it is possible to create specific queries that can lead to small amounts of the key being leaked. For keys that provide only 128 bits of security this can be a problem as leaking too many bits would reduce security beyond currently accepted levels. To counter this, we have effectively increased the minimum key size to 192 bits. This prevents this leakage becoming an attack vector using any practical methods. We discuss these attacks in more detail later on when discussing our future plans for VOPRF development.

Recent applications and standardizing the protocol

The application that we demonstrated when originally supporting Privacy Pass was always intended as a proof-of-concept for the protocol. Over the past few months, a number of new possibilities have arisen in areas that go far beyond what was previously envisaged.


For example, the Trust Token API, developed by the Web Incubator Community Group, has been proposed as an interface for using Privacy Pass. This application allows third-party vendors to check that a user has received a trust attestation from a set of central issuers. The vendor can then make decisions about the honesty of a client without having to associate a behaviour profile with the identity of the user. The objective is to prevent fraudulent activity from users who are not trusted by the central issuer set. Checking trust attestations with central issuers would be possible using redemption APIs similar to the one that we have introduced.

A separate piece of work from Facebook details a similar application for preventing fraudulent behavior that may also be compatible with the Privacy Pass protocol. Finally, other applications have arisen in the areas of providing access to private storage and establishing security and privacy models in advertisement confirmations.

A new draft

With the applications above in mind, we have recently started collaborative work on a new IETF draft that specifically lays out the required functionality provided by the Privacy Pass protocol as a whole. Our aim is to develop, alongside wider industrial partners and the academic community, a functioning specification of the Privacy Pass protocol. We hope that by doing this we will be able to design a base-layer protocol that can then be used as a cryptographic primitive in wider applications that require some form of lightweight authorization. Our plan is to present the first version of this draft at the upcoming IETF 106 meeting in Singapore next month.

The draft is still in the early stages of development and we are actively looking for people who are interested in helping to shape the protocol specification. We would be grateful for any help that contributes to this process. See the GitHub repository for the current version of the document.

Future avenues

Finally, while we are actively working on a number of different pathways in the present, the future directions for the project are still open. We believe that there are many applications out there that we have not considered yet and we are excited to see where the protocol is used in the future. Here are some other ideas we have for novel applications and security properties that we think might be worth pursuing in future.

Publicly verifiable tokens

One of the disadvantages of using a VOPRF is that redemption tokens are only verifiable by the original issuing server. If we used an underlying primitive that allowed public verification of redemption tokens, then anyone could verify that the issuing server had issued the particular token. Such a protocol could be constructed on top of so-called blind signature schemes, such as Blind RSA. Unfortunately, there are performance and security concerns arising from the usage of blind signature schemes in a browser environment. Existing schemes (especially RSA-based variants) require cryptographic computations that are much heavier than the construction used in our VOPRF protocol.

Post-quantum VOPRF alternatives

The only known constructions of VOPRFs exist in pre-quantum settings, usually based on the hardness of well-known problems in group settings such as the discrete-log assumption. No constructions of VOPRFs are known to provide security against adversaries that can run quantum computational algorithms. This means that the Privacy Pass protocol is only believed to be secure against adversaries running on classical hardware.

Recent developments suggest that quantum computing may arrive sooner than previously thought. As such, we believe that investigating the possibility of constructing practical post-quantum alternatives for our current cryptographic toolkit is a task of great importance for ourselves and the wider community. In this case, devising performant post-quantum alternatives for VOPRF constructions would be an important theoretical advancement. Eventually this would lead to a Privacy Pass protocol that still provides privacy-preserving authorization in a post-quantum world.

VOPRF security and larger ciphersuites

We mentioned previously that VOPRFs (or simply OPRFs) are susceptible to small amounts of possible leakage in the key. Here we will give a brief description of the actual attacks themselves, along with further details on our plans for implementing higher security ciphersuites to mitigate the leakage.

Specifically, malicious clients can interact with a VOPRF to create something known as a q-Strong-Diffie-Hellman (q-sDH) sample. Such samples are created in mathematical groups (usually in the elliptic curve setting). For any group there is a public element g that is central to all Diffie-Hellman type operations, along with the server key K, which is usually just interpreted as a randomly generated number from this group. A q-sDH sample takes the form:

( g, g^K, g^(K^2), … , g^(K^q) )

and asks the malicious adversary to create a pair of elements satisfying (g^(1/(s+K)), s). It is possible for a client in the VOPRF protocol to create a q-sDH sample by just submitting the result of the previous VOPRF evaluation back to the server.
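Concretely, treating the server as an oracle x -> x^K and resubmitting each output builds exactly such a sample (a sketch in a toy multiplicative group; the real setting is elliptic-curve, and real queries would be blinded):

```javascript
// Toy multiplicative group mod 23; the server's VOPRF plays the role of
// an oracle x -> x^K that the malicious client queries repeatedly.
const p = 23n;

const modpow = (b, e, m) => {
  let r = 1n;
  b %= m;
  for (; e > 0n; e >>= 1n, b = (b * b) % m) if (e & 1n) r = (r * b) % m;
  return r;
};

const oracle = (x, K) => modpow(x, K, p); // one (unblinded) VOPRF evaluation

// Build (g, g^K, g^(K^2), ..., g^(K^n)) using n oracle queries, by feeding
// each answer back in as the next query.
function qSDHSample(g, K, n) {
  const sample = [g];
  for (let i = 0; i < n; i++) sample.push(oracle(sample[sample.length - 1], K));
  return sample;
}
```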

While this problem is believed to be hard to break, there are a number of past works that show that the problem is somewhat easier than the size of the group suggests (for example, see here and here). Concretely speaking, the bit security implied by the group can be reduced by up to log2(q) bits. While this is not immediately fatal, even to groups that should provide 128 bits of security, it can lead to a loss of security that means that the setting is no longer future-proof. As a result, any group providing VOPRF functionality that is instantiated using an elliptic curve such as P-256 or Curve25519 provides weaker than advised security guarantees.

With this in mind, we have taken the recent decision to upgrade the ciphersuites that we recommend for OPRF usage to only those that provide > 128 bits of security, as standard. For example, Curve448 provides 192 bits of security. To launch an attack that reduced security to an amount lower than 128 bits would require making 2^(68) client OPRF queries. This is a significant barrier to entry for any attacker, and so we regard these ciphersuites as safe for instantiating the OPRF functionality.
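As back-of-the-envelope arithmetic (an illustration of the up-to-log2(q) loss described above, not a precise attack-cost model):

```javascript
// Effective bit security after `queries` oracle queries, assuming the
// worst-case log2(queries)-bit reduction from q-sDH-style attacks.
const effectiveBits = (groupBits, queries) => groupBits - Math.log2(queries);

// A ~128-bit group (e.g. P-256) erodes below accepted levels under heavy
// querying, while a ~192-bit group (e.g. Curve448) still clears the
// 128-bit bar even after 2^64 queries.
const p256After = effectiveBits(128, 2 ** 20); // 108 bits
const c448After = effectiveBits(192, 2 ** 64); // 128 bits
```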

In the near future, it will be necessary to upgrade the ciphersuites that are used in our support of the Privacy Pass browser extension to the recommendations made in the current VOPRF draft. In general, with a more iterative release process, we hope that the Privacy Pass implementation will be able to follow the current draft standard more closely as it evolves during the standardization process.

Get in touch!

You can now install v2.0 of the Privacy Pass extension in Chrome or Firefox.

If you would like to help contribute to the development of this extension then you can do so on GitHub. Are you a service provider that would like to integrate server-side support for the extension? Then we would be very interested in hearing from you!

We will continue to work with the wider community in developing the standardization of the protocol; taking our motivation from the available applications that have been developed. We are always looking for new applications that can help to expand the Privacy Pass ecosystem beyond its current boundaries.



Here are some extra details related to the topics that we covered above.

A. Commitment format for key rotations

Key commitments are necessary for the server to prove that they’re acting honestly during the Privacy Pass protocol. The commitments that Privacy Pass uses for the v2.0 release have a slightly different format from the previous release.

"2.00": {
  "H": "BPivZ+bqrAZzBHZtROY72/E4UGVKAanNoHL1Oteg25oTPRUkrYeVcYGfkOr425NzWOTLRfmB8cgnlUfAeN2Ikmg=",
  "expiry": "2020-01-11T10:29:10.658286752Z",
  "sig": "MEUCIQDu9xeF1q89bQuIMtGm0g8KS2srOPv+4hHjMWNVzJ92kAIgYrDKNkg3GRs9Jq5bkE/4mM7/QZInAVvwmIyg6lQZGE0="
}

First, the version of the server key is 2.00; the server must inform the client which version it intends to use in the response containing the issued tokens. This is so that the client can always use the correct commitments when verifying the zero-knowledge proof that the server sends.

The value of the member H is the public key commitment to the secret key used by the server. It is a base64-encoded elliptic curve point of the form H=kG, where G is the fixed generator of the curve and k is the secret key of the server. Since the discrete-log problem is believed to be hard to solve, deriving k from H is believed to be difficult. The value of the member expiry is an expiry date for the commitment that is used. The value of the member sig is an ECDSA signature, evaluated using a long-term signing key associated with the server, over the values of H and expiry.

When a client retrieves the commitment, it checks that it hasn’t expired and that the signature verifies using the corresponding verification key that is embedded into the configuration of the extension. If these checks pass, it retrieves H and verifies the issuance response sent by the server. Previous versions of these commitments did not include signatures, but these signatures will be validated from v2.0 onwards.

When a server wants to rotate the key, it simply generates a new key k2 and appends a new commitment to k2 with a new identifier such as 2.01. It can then use k2 as the secret for the VOPRF operations that it needs to compute.

B. Example Redemption API request

The redemption API is available over HTTPS by sending POST requests to the redemption endpoint. Requests to this endpoint must specify Privacy Pass data using JSON-RPC 2.0 syntax in the body of the request. Let’s look at an example request:

{
  "jsonrpc": "2.0",
  "method": "redeem",
  "params": {
    "data": [ ... ],
    "bindings": [ ... ]
  },
  "id": 1
}

In the above, data[0] is the client input data used to generate a token in the issuance phase; data[1] is the HMAC tag that the server uses to verify a redemption; and data[2] is a stringified, base64-encoded JSON object that specifies the hash-to-curve parameters used by the client. For example, the last element in the array corresponds to the object:

{
    curve: "p256",
    hash: "sha256",
    method: "swu"
}

This specifies that the client has used the curve P-256, with hash function SHA-256, and the SSWU method for hashing to the curve. This allows the server to verify the transaction with the correct ciphersuite. The client must bind the redemption request to some fixed information, which it stores as multiple strings in the array params.bindings. For example, it could send the Host header of the HTTP request and the HTTP path that was used (this is what the Privacy Pass browser extension does). Finally, params.compressed is an optional boolean value (defaulting to false) that indicates whether the HMAC tag was computed over compressed or uncompressed point encodings.

Currently the only supported ciphersuites are the one shown above, and the same suite with method set to increment for the hash-and-increment method of hashing to the curve. This was the original method used in v1.0 of Privacy Pass and is supported for backwards compatibility only. See the provided documentation for more details.
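To illustrate how such a request could be assembled, here is a sketch in Python (build_redeem_request is a hypothetical helper, and the token and tag values are placeholders; in the real extension these come out of the VOPRF protocol rather than being passed in as strings):

```python
import base64
import json

def build_redeem_request(token_data, hmac_tag, bindings, request_id=1):
    """Assemble the body of a JSON-RPC 2.0 redemption request.
    The hash-to-curve parameters travel as a stringified,
    base64-encoded JSON object in the last element of params.data."""
    h2c_params = base64.b64encode(json.dumps({
        "curve": "p256",
        "hash": "sha256",
        "method": "swu",
    }).encode()).decode()
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "redeem",
        "params": {
            "data": [token_data, hmac_tag, h2c_params],
            "bindings": list(bindings),
        },
        "id": request_id,
    })

# The bindings tie the redemption to this particular HTTP request.
body = build_redeem_request("<token-data>", "<hmac-tag>",
                            ["example.com", "/index.html"])
```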

Example response

If a request is sent to the redemption API and it is successfully verified, then the following response will be returned.

  "jsonrpc": "2.0",
  "result": "success",
  "id": 1

When an error occurs, a response similar to the following is returned.

  "jsonrpc": "2.0",
  "error": {
    "message": <error-message>,
    "code": <error-code>,
  "id": 1

The error codes we return follow the JSON-RPC 2.0 specification; the types of errors we provide are documented in the API documentation.
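A client consuming this API only needs to branch on the result and error members of the response; a minimal, illustrative sketch:

```python
import json

def parse_redeem_response(raw):
    """Return True on a successful redemption; raise with the
    server-supplied JSON-RPC 2.0 error code and message otherwise."""
    resp = json.loads(raw)
    if "error" in resp:
        err = resp["error"]
        raise RuntimeError("redemption failed (%s): %s"
                           % (err["code"], err["message"]))
    return resp.get("result") == "success"

ok = parse_redeem_response('{"jsonrpc": "2.0", "result": "success", "id": 1}')
```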


Saturday Morning Breakfast Cereal - Space [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

There is no intuitive way to understand doinkspace. We simply have to trust our calculations.

Today's News:

Launch tomorrow!


Build a virtual private network with Wireguard [Fedora Magazine]

Wireguard is a new VPN designed as a replacement for IPSec and OpenVPN. Its design goal is to be simple and secure, and it takes advantage of recent technologies such as the Noise Protocol Framework. Some consider Wireguard’s ease of configuration akin to OpenSSH. This article shows you how to deploy and use it.

It is currently in active development, so it might not be the best choice for production machines. However, Wireguard is under consideration for inclusion in the Linux kernel. The design has been formally verified,* and proven to be secure against a number of threats.

When deploying Wireguard, keep your Fedora Linux system updated to the most recent version, since Wireguard does not have a stable release cadence.

Set the timezone

To check and set your timezone, first display current time information:

timedatectl

Then if needed, set the correct timezone, for example to Europe/London.

timedatectl set-timezone Europe/London

Note that your system’s real time clock (RTC) may continue to be set to UTC or another timezone.

Install Wireguard

To install, enable the COPR repository for the project and then install with dnf, using sudo:

$ sudo dnf copr enable jdoss/wireguard
$ sudo dnf install wireguard-dkms wireguard-tools

Once installed, two new commands become available, along with support for systemd:

  • wg: Configuration of Wireguard interfaces
  • wg-quick: Bringing up the VPN tunnels

Create the configuration directory for Wireguard, and apply a umask of 077. The umask ensures that files created in this shell are readable, writable, and executable only by their owner (root), with all permissions removed for group and other users.

mkdir /etc/wireguard
cd /etc/wireguard
umask 077
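The arithmetic behind the umask is simple bit-masking, which this quick Python sketch demonstrates:

```python
# A umask removes (bitwise-clears) permission bits from the mode a
# program requests when it creates a file.
requested = 0o666           # typical mode requested for a new file
umask = 0o077               # clear every group and other permission bit
effective = requested & ~umask
print(oct(effective))       # 0o600: read/write for the owner only
```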

Generate Key Pairs

Generate the private key, then derive the public key from it.

$ wg genkey > /etc/wireguard/privkey
$ wg pubkey < /etc/wireguard/privkey > /etc/wireguard/publickey

Alternatively, this can be done in one go:

wg genkey | tee /etc/wireguard/privkey | wg pubkey > /etc/wireguard/publickey

There is a vanity address generator, which might be of interest to some. You can also generate a pre-shared key, which adds a layer of symmetric cryptography for some protection against future quantum attacks:

wg genpsk > psk

This will be the same value for both the server and client, so you only need to run the command once.

Configure Wireguard server and client

Both the client and server have an [Interface] option to specify the IP address assigned to the interface, along with the private keys.

Each peer (server and client) has a [Peer] section containing the other party’s PublicKey, along with the shared PresharedKey. This block also lists, in AllowedIPs, the addresses that are allowed to use the tunnel.


A firewall rule is added when the interface is brought up, along with enabling masquerading. Make sure to note the /24 IPv4 address range within Interface, which differs from the client. Edit the /etc/wireguard/wg0.conf file as follows, using the IP address for your server for Address, and the client IP address in AllowedIPs.

[Interface]
Address    = <SERVER_IPV4>/24, fd00:7::1/48
PrivateKey = <SERVER_PRIVATE_KEY>
PostUp     = firewall-cmd --zone=public --add-port 51820/udp && firewall-cmd --zone=public --add-masquerade
PostDown   = firewall-cmd --zone=public --remove-port 51820/udp && firewall-cmd --zone=public --remove-masquerade
ListenPort = 51820

[Peer]
PublicKey    = <CLIENT_PUBLIC_KEY>
PresharedKey = <PRESHARED_KEY>
AllowedIPs   = <CLIENT_IPV4>/32, fd00:7::2/48

Allow forwarding of IP packets by adding the following to /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
Load the new settings:

$ sysctl -p

Forwarding will be preserved after a reboot.


The client config is very similar to the server’s, but has an optional additional entry, PersistentKeepalive, set to 30 seconds. This prevents NAT from dropping the connection, and depending on your setup might not be needed. Setting AllowedIPs to, ::/0 forwards all traffic over the tunnel. Edit the client’s /etc/wireguard/wg0.conf file as follows, using your client’s IP address for Address and the server IP address as the Endpoint.

[Interface]
Address    = <CLIENT_IPV4>/32, fd00:7::2/48
PrivateKey = <CLIENT_PRIVATE_KEY>

[Peer]
PublicKey    = <SERVER_PUBLIC_KEY>
PresharedKey = <PRESHARED_KEY>
AllowedIPs   =, ::/0
Endpoint     = <SERVER_IP>:51820
PersistentKeepalive = 30

Test Wireguard

Start and check the status of the tunnel on both the server and client:

$ systemctl start wg-quick@wg0
$ systemctl status wg-quick@wg0

To test the connection, ping the peer’s tunnel address from each side; for example, ping the server’s IPv6 tunnel address from the client:

$ ping fd00:7::1
Then check external IP addresses:

dig +short
dig +short -6 aaaa

* “Formally verified,” in this sense, means that the design has been mathematically proven to provide message and key secrecy, forward secrecy, mutual authentication, session uniqueness, channel binding, and resistance against replay, key compromise impersonation, and denial of service attacks.

Photo by Black Zheng on Unsplash.

Sunday, 27 October


Beyond Labels: Stories of Asian Pacific Islanders at Yelp [Yelp Engineering and Product Blog]

During Asian Pacific American Heritage Month, ColorCoded (a Yelp employee resource group) hosted a panel discussion called “Beyond Labels: Stories of Asian Pacific Islanders (API)* at Yelp.” We heard stories from five API Yelpers about their cultural backgrounds, identities, and thoughts on what it means to be an API in today’s world. Their stories helped us understand that identity is both multilayered and contextual, and that individuality goes beyond labels. Read more from their unique perspectives below. Tenzin Kunsal, Events + Partnerships, Engineering Recruiting From a young age, I knew the concept of “home” was complicated. Like many refugees, my...


Tales from the Crypt(o team) [The Cloudflare Blog]


Halloween season is upon us. This week we’re sharing a series of blog posts about work being done at Cloudflare involving cryptography, one of the spookiest technologies around. So subscribe to this blog and come back every day for tricks, treats, and deep technical content.

A long-term mission

Cryptography is one of the most powerful technological tools we have, and Cloudflare has been at the forefront of using cryptography to help build a better Internet. Of course, we haven’t been alone on this journey. Making meaningful changes to the way the Internet works requires time, effort, experimentation, momentum, and willing partners. Cloudflare has been involved with several multi-year efforts to leverage cryptography to help make the Internet better.

Here are some highlights to expect this week:

The milestones we’re sharing this week would not be possible without partnerships with companies, universities, and individuals working in good faith to help build a better Internet together. Hopefully, this week provides a fun peek into the future of the Internet.


Saturday Morning Breakfast Cereal - They Walk Among Us [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

One day, I need to release a whole book of comics about humans winning by being gross.

Today's News:

Saturday, 26 October


Saturday Morning Breakfast Cereal - Hoax [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

The alternative view is that we should stop faking so many achievements in space and start faking care of the people down here.

Today's News: