Sunday, 23 February

13:45

Oracle's Allies Against Google Include Scott McNealy and America's Justice Department [Slashdot]

America's Justice Department "has filed a brief in support of Oracle in its Supreme Court battle against Google over whether Java should have copyright protection," reports ZDNet: The Justice Department filed its amicus brief to the Supreme Court this week, joining a mighty list of briefs from major tech companies and industry luminaries — including Scott McNealy, co-founder of Sun, which Oracle bought in 2010, acquiring Sun-built Java in the process. While Microsoft, IBM and others have backed Google's arguments in the decade-long battle, McNealy, like the Justice Department, is opposing Google. McNealy called Google's description of how it uses Java packages a "woeful mischaracterization of the artful design of the Java packages" and "an insult to the hard-working developers at Sun who made Java such a success...." Joe Tucci, former CEO of now Dell-owned enterprise storage giant EMC, threw in his two cents against Google. "Accepting Google's invitation to upend that system by eliminating copyright protection for creative and original computer software code would not make the system better — it would instead have sweeping and harmful effects throughout the software industry," Tucci's brief reads. Oracle is also questioning the motives of Google's allies, reports The Verge: After filing a Supreme Court statement last week, Oracle VP Ken Glueck posted a statement over the weekend assailing the motives of Microsoft, IBM, and the CCIA industry group, all of which have publicly supported Google. Glueck's post comes shortly after two groups — an interdisciplinary panel of academics and the American Conservative Union Foundation — submitted legal briefs supporting Oracle. Both groups argued that Google should be liable for copying code from the Java language for the Android operating system. The ACUF argued that protecting Oracle's code "is fundamental to a well-ordered system of private property rights and indeed the rule of law itself...." Earlier this year, Google garnered around two dozen briefs supporting its position. But Oracle claims that in reality, "Google appears to be virtually alone — at least among the technology community." Glueck says Google's most prominent backers had ulterior motives or "parochial agendas"; either they were working closely with Google, or they had their own designs on Java... Even if you accept Oracle's arguments wholeheartedly, there's a long list of other Google backers from the tech community. Advocacy groups like the Electronic Frontier Foundation and the Center for Democracy and Technology signed on to amicus briefs last month, as did several prominent tech pioneers, including Linux creator Linus Torvalds and Apple cofounder Steve Wozniak. The CCIA brief was signed by the Internet Association, a trade group representing many of the biggest companies in Silicon Valley. Patreon, Reddit, Etsy, the Mozilla Corporation, and other midsized tech companies also backed a brief raising "fundamental concerns" about Oracle's assertions.

Read more of this story at Slashdot.

12:35

Would Star Trek's Transporters Kill and Replace You? [Slashdot]

schwit1 quotes Syfy Wire: There is, admittedly, some ambiguity about precisely how Trek's transporters work. The events of some episodes subtly contradict events in others. The closest thing to an official word we have is the Star Trek: The Next Generation Technical Manual, which states that when a person enters a transporter, they are scanned by molecular imaging scanners that convert a person into a subatomically deconstructed matter stream. That's all a fancy-pants way of saying it takes you apart, atom by atom, and converts your matter into energy. That energy can then be beamed to its destination, where it's reconstructed. According to Trek lore, we're meant to believe this is a continuous process. Despite being deconstructed and rebuilt on the other end, you never stop being "you...." [Alternately] the fact that you are scanned, deconstructed, and rebuilt almost immediately thereafter only creates the illusion of continuity. In reality, you are killed and then something exactly like you is born, elsewhere. If the person constructed on the other end is identical to you, down to the atomic level, is there any measurable difference from it being actually you? Those are questions we can't begin to answer. What seems clear — whatever the technical manual says — is you die when you enter a transporter, however briefly. The article also cites estimates that it would take three gigajoules of energy (about one bolt of lightning) to disassemble somebody's atoms, and 10 to the 28th power kilobytes to then hold all that information -- and 2.6 tredecillion bits of data to transmit it. "The estimated time to transmit, using the standard 30 GHz microwave band used by communications satellites, would take 350,000 times longer than the age of the universe."

Read more of this story at Slashdot.

11:34

Safari Will Stop Trusting Certs Older Than 13 Months [Slashdot]

"Safari will, later this year, no longer accept new HTTPS certificates that expire more than 13 months from their creation date..." writes the Register. Long-time Slashdot reader nimbius shares their report: The policy was unveiled by the iGiant at a Certification Authority Browser Forum (CA/Browser) meeting on Wednesday. Specifically, according to those present at the confab, from September 1, any new website cert valid for more than 398 days will not be trusted by the Safari browser and instead rejected. Older certs, issued prior to the deadline, are unaffected by this rule. By implementing the policy in Safari, Apple will, by extension, enforce it on all iOS and macOS devices. This will put pressure on website admins and developers to make sure their certs meet Apple's requirements — or risk breaking pages on a billion-plus devices and computers... The aim of the move is to improve website security by making sure devs use certs with the latest cryptographic standards, and to reduce the number of old, neglected certificates that could potentially be stolen and re-used for phishing and drive-by malware attacks... We note Let's Encrypt issues free HTTPS certificates that expire after 90 days, and provides tools to automate renewals.

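As a quick illustration of what the new limit means in practice, here is a small sketch (Python standard library only; the hostname is just a placeholder) that measures how long a site's leaf certificate is valid for and compares it against the 398-day cutoff described above:

```python
import socket
import ssl

def cert_lifetime_days(host: str, port: int = 443) -> float:
    """Return the validity period, in days, of the certificate served by host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issued = ssl.cert_time_to_seconds(cert["notBefore"])
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - issued) / 86400

if __name__ == "__main__":
    days = cert_lifetime_days("example.com")  # placeholder hostname
    verdict = "exceeds" if days > 398 else "fits within"
    print(f"Certificate lifetime: {days:.0f} days ({verdict} Apple's 398-day limit)")
```

Remember that certificates issued before the September 1 deadline are unaffected, so a long lifetime on an existing certificate is not by itself a problem.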
Read more of this story at Slashdot.

11:19

C-SKY CPU Architecture For Linux 5.6 Picks Up Stack Protector, PCI Support [Phoronix]

While it is now two weeks past the Linux 5.6 merge window, some late changes for the C-SKY CPU architecture were accepted today...

10:34

Flat-Earth Daredevil Mad Mike Hughes Dies in Homemade Rocket Launch [Slashdot]

"He was working on a TV show, Homemade Astronauts, when his craft crashed in the California desert," reports NBC. Four different Slashdot readers shared the news. NBC News reports: Daredevil "Mad" Mike Hughes died Saturday when a homemade rocket he was attached to launched but quickly dove to earth in the California desert. The stunt was apparently part of a forthcoming television show, "Homemade Astronauts," that was scheduled to debut later this year on Discovery Inc.'s Science Channel. Discovery confirmed the 64-year-old's death in a statement. "It was always his dream to do this launch, and Science Channel was there to chronicle his journey," the company said... In 2018, he successfully launched himself about 1,875 feet into the sky above the Mojave desert via a garage-made rocket. His landing that year was softened when he deployed a parachute. In social media video of Saturday's accident, a parachute-like swath of fabric can be seen flying away from the rocket shortly after blast-off.

Read more of this story at Slashdot.

09:34

American Lawmakers Launch Investigations Into Ring's Police Deals [Slashdot]

A U.S. Congressional subcommittee is now "pursuing a deeper understanding of how Ring's partnerships with local and state law enforcement agencies mesh with the constitutional protections Americans enjoy against unbridled police surveillance," reports Gizmodo: Rep. Raja Krishnamoorthi, chairman of the House Oversight and Reform subcommittee on economic and consumer policy, is seeking to learn why, in more than 700 jurisdictions, police have signed contracts that surrender control over what city officials can say publicly about the Amazon-owned company... "In one instance, Ring is reported to have edited a police department's press release to remove the word 'surveillance,'" the letter says, citing a Gizmodo report from last fall. But that's just the beginning, reports Ars Technica: Congress wants a list of every police deal Ring actually has, the House Subcommittee on Economic and Consumer Policy wrote in a letter (PDF) dated February 19. After that, the Subcommittee wants to know... well, basically everything. The request for information asks for documentation relating to "all instances in which a law enforcement agency has requested video footage from Ring," as well as full lists of all third-party firms that get any access to Ring users' personal information or video footage. Ring is also asked to send over copies of every privacy notice, terms of service, and law enforcement guideline it has ever had, as well as materials relating to its marketing practices and any potential future use of facial recognition. And last but not least, the letter requests, "All documents that Ring or Amazon has produced to state attorneys general, the Federal Trade Commission, the Department of Justice, or Congress in response to investigations into Ring...." The company in the fall pulled together a feel-good promotional video comprising images of children ringing Ring doorbells to trick-or-treat on Halloween. It is unclear if Ring sought consent to use any of the clearly visible images of the children or their parents shown in that video... Ring has also faced pressure to describe its plans for future integration of facial recognition systems into its devices. While the company has stated repeatedly that it has no such integration, documents and video promotional materials obtained by reporters in the past several months show that the company is strongly looking into it for future iterations of the system... The House letter gives Amazon a deadline of March 4 to respond with all the requested documentation. Amazon responded by cutting the price of a Ring doorbell camera by $31 -- and offering to also throw in one of Amazon's Alexa-enabled "Echo Dot" smart speakers for free.

Read more of this story at Slashdot.

09:30

Reiser5 Spun Up For The Linux 5.5.5 Kernel [Phoronix]

For those who have been wanting to take the experimental Reiser5 for a test drive since it was announced at the end of 2019, new versions of the Reiser4 and Reiser5 file-system kernel patches have been posted...

09:15

Saturday Morning Breakfast Cereal - Craproot [Saturday Morning Breakfast Cereal]

Hovertext:
If you want good food, find people who've been poor a long time, but have plenty of salt, sugar, and fat.

08:34

FizzBuzz 2.0: Pragmatic Programming Questions For Software Engineers [Slashdot]

A former YC partner co-founded a recruiting company for technical hiring, and one of its software engineers is long-time Slashdot reader compumike. He now writes: Like the decade-old Fizz Buzz Test, there are some questions that are trivial for anyone who can build software at a professional level, but are likely to stump anyone who can't hack it. I analyzed the data from over 100,000 programmers to reveal how five multiple-choice questions easily separate the real software engineers from the rest. The questions (and the data about correct answers) come from Triplebyte's own coder-recruiting quiz, and "98% of successful engineers answer at least 4 of 5 correctly," explains Mike's article. ("Successful" engineers are defined as those who went on to receive an inbound message from a company matching their preferences through Triplebyte's platform.) "I'm confident that if you're an engineering manager running an interview, you wouldn't give an offer to someone who performed below that line." Question 1: What kind of SQL statement retrieves data from a table? LOOKUP / READ / FETCH / SELECT
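For anyone who hasn't run into it, the decade-old exercise the article name-checks really is tiny -- a Python version follows (the Triplebyte questions themselves are multiple-choice rather than coding tasks):

```python
# The classic Fizz Buzz screen: count from 1 to 100, printing "Fizz" for
# multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for n in range(1, 101):
    label = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    print(label or n)
```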

Read more of this story at Slashdot.

07:37

GNU Project Publishes Outline Of Its Structure & Administration [Phoronix]

As part of clearing up the relationship between the FSF and GNU and seeking to add more clarity to the GNU Project, Richard Stallman has announced a document outlining the structure and administration of the project...

07:34

How Peloton Bricked the Screens On Flywheel's Stationary Bikes [Slashdot]

DevNull127 writes: Let me get this straight. Peloton's main product is a stationary bicycle costing over $2,000 with a built-in touchscreen for streaming exercise classes. ("A front facing camera and microphone mean you can interact with friends and encourage one another while you ride," explained the Kickstarter campaign which helped launch the company in 2013, with 297 backers pledging $307,332.) Soon after they went public last summer, Bloomberg began calling them "the unprofitable fitness company whose stock has been skidding," adding "The company is working on a new treadmill that will cost less than the current $4,000 model, as well as a rowing machine." Last March they were also sued for $150 million for using music in workout videos without proper licensing, according to the Verge — which notes that the company was then valued at $4 billion. And then this week Vice reported on what happened to one of their competitors. "Flywheel offered both in-studio and in-home stationary bike classes similar to Peloton. Peloton sued Flywheel for technology theft, claiming Flywheel's in-home bikes were too similar to Peloton's. Flywheel settled out of court and, as part of that settlement, it's pointing people to Peloton who is promising to replace the $2,000 Flywheel bikes with refurbished Pelotons... When Peloton delivers these replacement bikes, it'll also haul away the old Flywheels." The Verge reports that one Flywheel customer who'd been enjoying her bike since 2017 "received an email from Peloton, not Flywheel, informing her that her $1,999 bike would no longer function by the end of next month." "It wasn't like Flywheel gave us any option if you decide not to take the Peloton," she says. "Basically it was like: take it or lose your money. They didn't even attempt to fix it with their loyal riders. It felt like a sting."

Read more of this story at Slashdot.

05:44

Weekend Discussion: How Concerned Are You If Your CPU Is Completely Open? [Phoronix]

For some interesting Sunday debates in the forums, how important to you is having a completely open CPU design? Additionally, is POWER dead? This comes following interesting remarks by an industry leader this weekend...

05:34

Signing Up With Amazon, Wal-Mart, Or Uber Forfeits Your Right To Sue Them [Slashdot]

Long-time Slashdot reader DogDude shared this article from CNN: Tucked into the sign-up process for many popular e-commerce sites and apps are dense terms-of-service agreements that legal experts say are changing the nature of consumer transactions, creating a veil of secrecy around how these companies function. The small print in these documents requires all signatories to agree to binding arbitration and to clauses that ban class actions. Just by signing up for these services, consumers give up their rights to sue companies like Amazon, Uber and Walmart before a jury of their peers, agreeing instead to undertake a private process overseen by a paid arbitrator... The proliferation of apps and e-commerce means that such clauses now cover millions of everyday commercial transactions, from buying groceries to getting to the airport... Consumers are "losing access to the courthouse," said Imre Szalai, a law professor at Loyola University New Orleans.

Read more of this story at Slashdot.

05:09

KDE Saw Many Bug Fixes This Week From KWin Crashes To Plasma Wayland Improvements [Phoronix]

This week in particular saw a lot of fixes in the KDE space for a wide variety of bugs...

01:34

Will Low-Code and No-Code Platforms Revolutionize Programming? [Slashdot]

In a new article in Forbes, a Business Technology professor at the Villanova School of Business argues that the way we build software applications is changing: If you're living in the 21st century you turn to your cloud provider for help where many of the most powerful technologies are now offered as-a-service. When your requirements cannot be completely fulfilled from cloud offerings, you build something. But what does "building" mean? What does "programming" mean...? You can program from scratch. You can go to Github (where you can find code of all flavors). Or you can — if you're a little lazier — turn to low-code or no-code programming platforms to develop your applications. All of this falls under the umbrella of what the Gartner Group defines as the "democratization of expertise": "Democratization is focused on providing people with access to technical expertise (for example, ML, application development) or business domain expertise (for example, sales process, economic analysis) via a radically simplified experience and without requiring extensive and costly training...." [T]he new repositories, platforms and tools are enabling a whole new set of what we used to call "programming." As Satya Nadella said, "Every business will become a software business, build applications, use advanced analytics and provide SAAS services," and as Sajjad Daya says so well in Hackernoon, "Coding takes too long for it to be both profitable and competitively priced. That's not the case with no-code platforms, though. The platforms do the complicated programming automatically, slashing development time..." The technology democracy has forever changed corporate strategy. And what does this mean? It means that the technical team scales on cue. But "technical" means competencies around Github, low-code/no-code platforms and especially business domains... [A]ll of this levels the technology playing field among companies — so long as they understand the skills and competencies they need.

Read more of this story at Slashdot.

Saturday, 22 February

22:34

Nonprofit Argues Germany Can't Ratify the 'Unitary Patent' Because of Brexit [Slashdot]

Long-time Slashdot reader zoobab shares this update from the Foundation for a Free Information Infrastructure, a Munich-based non-profit opposing ratification of a "Unified Patent Court" by Germany. They argue such a court will "validate and expand software patents in Europe," and they've come up with a novel argument to stop it. "Germany cannot ratify the current Unitary Patent due to Brexit..." The U.K. is now a "third state" within the meaning of AETR case-law, [which] makes clear that: "Each time the Community, with a view to implementing a common policy envisaged by the Treaty, adopts provisions laying down common rules, whatever form they may take, the Member States no longer have the right, acting individually or even collectively, to undertake obligations with third countries which affect those rules or alter their scope..." This practically means that the ratification procedure for the Agreement on the Unified Patent Court must now come to an end, as that Agreement no longer applies due to the current significant changes (i.e. Brexit) in the membership requirements of its own ratification rules. The nonprofit also argues that the Unitary Patent "is a highly controversial and extreme issue, as it allows new international patent courts to have the last word on the development and application of patent law and industrial property monopolies including, more seriously, the validation and expansion of software patents, that is the key sector on which whole industries and markets depend."

Read more of this story at Slashdot.

19:34

Russian Trolls Now Just Push Divisive Content Created By Others [Slashdot]

"Americans don't need Russia's polarizing influence operations. They are plenty good enough at dividing themselves," writes the Atlantic's national security reporter, arguing that "the new face of Russian propaganda" is just a carefully-curated selection of inflammatory content made by Americans themselves. Citing the Mueller investigation, the article notes the irony that America's two front-runners for the presidency are now "both candidates Russian trolls sought to promote in 2016," calling them "far apart ideologically but nearly equally suited to the Kremlin's interests, both in being divisive at home and in encouraging U.S. restraint abroad." In 2016, the Kremlin invested heavily in creating memes and Facebook ads designed to stoke Americans' distrust of the electoral system and one another... The Russian government is still interfering, but it doesn't need to do much creative work anymore... Americans are now the chief suppliers of the material that suspected Russia-linked accounts use to stoke anger ahead of U.S. elections, leaving Russia free to focus on pushing it as far as possible. Darren Linvill, a Clemson University professor who has studied Russian information operations, has seen Russian trolls shift tactics to become "curators more than creators," with the same goal of driving Americans apart. "The Russians love those videos," he said, "because they function to make us more disgusted with one another...." [The article cites actions by Russia's "Internet Research Agency" in America's 2018 elections.] The organization was still creating memes, and it got an even bigger budget, according to Graham Brookie, the director of the Digital Forensic Research Lab at the Atlantic Council think tank. But it also began using more of what Americans themselves were putting on the internet, seizing on divisive debates about immigration, gun control, and police shootings of unarmed black men, using real news stories to highlight genuine anger and dysfunction in American politics... Russian trolls can largely just watch Americans fight among themselves, and use fictitious Twitter personas to offer vigorous encouragement... They will keep prodding the same bruises in American society, or encouraging cries of electoral fraud if there's a contested Democratic primary or a tight general election. Alina Polyakova, the president and CEO of the Center for European Policy Analysis, tells the Atlantic that "a U.S. that's mired in its own domestic problems and not engaged in the world benefits Moscow."

Read more of this story at Slashdot.

17:42

Linux's FSCRYPT Working On Encryption + Case-Insensitive Support [Phoronix]

FSCRYPT, the file-system encryption framework for the Linux kernel that is currently wired up for EXT4, F2FS, and UBIFS to offer native encryption capabilities, is seeing improvements so that the separate casefolding (case-insensitive) file/folder support can work on encrypted directories...

17:34

Amazon Is Collecting Donations For a Scientology-Linked Anti-Drug Charity [Slashdot]

An anonymous reader quotes the Guardian: Amazon has agreed to channel funds to a controversial drug rehabilitation charity linked to the Church of Scientology, the Guardian has learned. The web giant will make donations to Narconon — which runs programmes for drug addicts based on the teachings of the Scientology founder, L Ron Hubbard — when supporters buy products through the site, with shoppers able to pledge 0.5% of purchases to selected charities under Amazon's "Smile" feature... Experts have warned the charity's methods have no scientific basis and its link to Scientology has prompted criticism that it is a front used to convert people to the religion, which some former devotees have labelled a cult... The Guardian discovered that Amazon US allows shoppers to donate funds to more than a dozen Narconon-related charities, including its international branch based in Clearwater, Florida, near Scientology's "spiritual headquarters". "The Narconon treatment invokes concepts of residual drug in body and brain which have no scientific validity," complains professor David Nutt, who formerly chaired the government's Advisory Council on the Misuse of Drugs. And he also tells the Guardian Narconon's anti-drug talks in schools aren't "truly scientifically or evidence based." "Sadly we have known for years that Scientology is the main provider of 'teaching' materials on addiction to schools, as the UK government doesn't fund any alternative sources."

Read more of this story at Slashdot.

16:34

Are APIs Putting Financial Data At Risk? [Slashdot]

We live in a world where billions of login credentials have been stolen, enabling the brute-force cyberattacks known as "credential stuffing", reports CSO Online. And it's being made easier by APIs: New data from security and content delivery company Akamai shows that one in every five attempts to gain unauthorized access to user accounts is now done through application programming interfaces (APIs) instead of user-facing login pages. According to a report released today, between December 2017 and November 2019, Akamai observed 85.4 billion credential abuse attacks against companies worldwide that use its services. Of those attacks, around 16.5 billion, or nearly 20%, targeted hostnames that were clearly identified as API endpoints. However, in the financial industry, the percentage of attacks that targeted APIs rose sharply between May and September 2019, at times reaching 75%. "API usage and widespread adoption have enabled criminals to automate their attacks," the company said in its report. "This is why the volume of credential stuffing incidents has continued to grow year over year, and why such attacks remain a steady and constant risk across all market segments." APIs also make it easier to extract information automatically, the article notes, while security experts "have long expressed concerns that implementation errors in banking APIs and the lack of a common development standard could increase the risk of data breaches." Yet the EU's "Payment Services Directive" included a push for third-party interoperability among financial institutions, so "most banks started implementing such APIs... Even if no similar regulatory requirements exist in non-EU countries, market forces are pushing financial institutions in the same direction since they need to innovate and keep up with the competition."

Read more of this story at Slashdot.

15:34

City Sues Drug Manufacturer Mallinckrodt Over 97,500% Price Increase [Slashdot]

McGruber quotes Atlanta TV station WSB: The city of Marietta, Georgia is suing drug manufacturer Mallinckrodt after Mallinckrodt increased the price of the drug Acthar by 97,500%. The lawsuit, filed in federal court, claims one city employee needs the drug Acthar, which is used to treat seizures in small children. "Acthar used to cost $40, but Mallinckrodt has raised the price of the drug to over $39,000 per vial," the city claims in the lawsuit. "This eye-popping 97,500% price increase is the result of unlawful and unfair conduct by Mallinckrodt. The city has expended over $2 million for just one patient covered by the city's self-funded health plan...." Atlanta pharmacist Ira Katz said Acthar is what's called a "biologic" and they can be classified as specialty drugs. "They put them into the specialty class, and the prices are outrageous, just outrageous," Katz said.

Read more of this story at Slashdot.

14:34

Did 'The Simpsons' Accurately Portray STEM Education and the Gig Economy? [Slashdot]

Long-time Slashdot reader theodp writes: On Sunday, The Simpsons aired The Miseducation Of Lisa Simpson, an episode in which Marge — with the help of a song from John Legend ("STEM, it's not just for dorks, dweebs and nerds / It'll turn all your dumb kids to Zuckerbergs") — convinces Springfield to use a windfall the town reaped by seizing shipwreck treasure to build the Springfield STEM Academy to "prepare kids for the jobs of tomorrow." All goes well initially — both Lisa and Bart love their new school — until Lisa realizes there's a two-tiered curriculum. While children classified as "divergent pathway assimilators" (i.e., gifted) like Lisa study neural networks and C+++ upstairs, kids like Bart are relegated to the basement where they're prepared via VR and gamified learning for a life of menial, gig economy side-hustles — charging e-scooters, shopping for rich people's produce, driving ride-share. The school's administrator was even played by Silicon Valley actor Zach Woods, who delivered one of the episode's harshest lines, notes The A.V. Club. "Staging a Norma Rae-style revolt at how the 'non-gifted' students are being trained to do everyone else's dirty work, Lisa's brought up short with a startled 'Eep' by Woods' administrator asking, 'Isn't that the point of a gifted class?'"

Read more of this story at Slashdot.

13:34

'Hutter Prize' for Lossless Compression of Human Knowledge Increased to 500,000€ [Slashdot]

Baldrson (Slashdot reader #78,598) writes: First announced on Slashdot in 2006, AI professor Marcus Hutter has gone big with his challenge to the artificial intelligence [and data compression] community. A 500,000€ purse now backs The Hutter Prize for Lossless Compression of Human Knowledge... Hutter's prize incrementally awards distillation of Wikipedia's storehouse of human knowledge to its essence. That essence is a 1-billion-character excerpt of Wikipedia called "enwik9" -- approximately the amount that a human can read in a lifetime. And 14 years ago, Baldrson wrote a Slashdot article explaining how this long-running contest has its roots in a theory which could dramatically advance the capabilities of AI: The basic theory, for which Hutter provides a proof, is that after any set of observations the optimal move by an AI is to find the smallest program that predicts those observations and then assume its environment is controlled by that program. Think of it as Ockham's Razor on steroids. Writing today, Baldrson argues this could become a much more sophisticated Turing Test. Formally it is called Algorithmic Information Theory or AIT. AIT is, according to Hutter's "AIXI" theory, essential to Universal Intelligence. Hutter's judging criterion is superior to Turing tests in 3 ways: 1) It is objective, 2) It rewards incremental improvements, and 3) It is founded on a mathematical theory of natural science. Detailed rules for the contest and answers to frequently asked questions are available.
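To make the contest's yardstick concrete, here is a toy baseline using off-the-shelf DEFLATE (the file path is a placeholder; a real entry is judged on the total size of a self-extracting archive of enwik9, so the decompressor itself counts against the score):

```python
import zlib

# Toy baseline only: see how far generic DEFLATE gets on a text sample,
# using the same metric (compressed size in bytes) that the prize optimizes.
with open("sample.txt", "rb") as f:  # placeholder path, e.g. a slice of enwik9
    data = f.read()

packed = zlib.compress(data, level=9)
print(f"original:   {len(data):,} bytes")
print(f"compressed: {len(packed):,} bytes ({len(packed) / len(data):.1%} of original)")
```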

Read more of this story at Slashdot.

12:34

A Ransomware Attack Shut a US Natural Gas Plant and Its Pipelines [Slashdot]

Long-time Slashdot reader Garabito writes: The Department of Homeland Security has revealed that an unnamed U.S. natural gas compression facility was forced to shut down operations for two days after becoming infected with ransomware. The plant was targeted with a phishing e-mail, that allowed the attacker to access its IT network and then pivot to its Operational Technology (OT) control network, where it compromised Windows PCs used as human machine interface, data historians and polling servers, which led the plant operator to shut it down along with other assets that depended on it, including pipelines. According to the DHS CISA report, the victim failed to implement robust segmentation between the IT and OT networks, which allowed the adversary to traverse the IT-OT boundary and disable assets on both networks.

Read more of this story at Slashdot.

11:35

Co-Creator of the First Star Trek Convention Has Died [Slashdot]

Long-time Slashdot reader sandbagger shared this report from the Hugo award-winning science fiction fanzine File 770: North Bellmore, New York fan Elyse Rosenstein, 69, died suddenly on February 20th. She had been undergoing rehabilitation after suffering a broken leg. At the time of her death, she was a retired secondary school science teacher. With Joyce Yasner, Joan Winston, Linda Deneroff and Devra Langsam, she organized the very first Star Trek convention, held in New York City in 1972. The convention was not only the very first media convention, it was also the biggest science fiction convention to date by a considerable margin... At the time, Star Trek fans were often looked down on by many science fiction fans, who were more into books and magazines than TV shows. The pair hoped that a convention specifically geared towards Star Trek would do a lot to bring fans together. The rest, as they say, is fan history.... Elyse Rosenstein had a BS in physics and math, and an MS in physics, and taught science for more than two decades. She was a member of the New York Academy of Sciences and the Long Island Physics Teachers Association... She was nicknamed "The Screaming Yellow Zonker" by Isaac Asimov.

Read more of this story at Slashdot.

11:33

Radeon Pro Software for Enterprise 20.Q1.1 for Linux Released [Phoronix]

AMD's Radeon Pro Software for Enterprise 20.Q1.1 Linux driver release was made available this week as their newest quarterly driver installment intended for use with Radeon Pro graphics hardware...

10:34

Breach of MGM Hotels' Cloud Server Exposed Data on 10.6 Million People [Slashdot]

Personal information from more than 10.6 million people was published online this week, reports ZDNet -- all from people who'd stayed at MGM Resorts hotels (which include the Bellagio, Mandalay Bay, and ARIA): Besides details for regular tourists and travelers, included in the leaked files are also personal and contact details for celebrities, tech CEOs, reporters, government officials, and employees at some of the world's largest tech companies. ZDNet verified the authenticity of the data today, together with a security researcher from Under the Breach, a soon-to-be-launched data breach monitoring service. A spokesperson for MGM Resorts confirmed the incident via email. According to our analysis, the MGM data dump that was shared today contains personal details for 10,683,188 former hotel guests. Included in the leaked files are personal details such as full names, home addresses, phone numbers, emails, and dates of birth... These users now face a higher risk of receiving spear-phishing emails, and being SIM swapped, Under the Breach told ZDNet. Twitter CEO Jack Dorsey, pop star Justin Bieber, and DHS and TSA officials are some of the big names Under the Breach spotted in the leaked files. While the data appears to be several years old, Irina Nesterovsky, Head of Research at threat intel firm KELA, tells ZDNet that the data has been shared in "hacking forums" since last July. MGM blames the breach on "unauthorized access to a cloud server" last summer -- pointing out that at least no credit card information was stolen, and that they notified all affected customers. But NBC News "spoke to a man with a Secret Service email address who was surprised to learn that he had been hacked. He said MGM never notified him about the breach." MGM told ZDNet that "we take our responsibility to protect guest data very seriously, and we have strengthened and enhanced the security of our network to prevent this from happening again."

Read more of this story at Slashdot.

09:34

How Artificial Shrimps Could Change the World [Slashdot]

Singaporean company Shiok Meats aims to grow artificial shrimp to combat the negative environmental effects associated with farmed shrimp. An anonymous reader shares an excerpt from The Economist: For a long time, beef has been a target of environmentalists because of cattle farming's contribution to global warming. But what about humble shrimp and prawns? They may seem, well, shrimpy when compared with cows, but it turns out the tasty decapods are just as big an environmental problem. The issue is not so much their life cycle: shrimp (as UN statisticians refer to all commonly eaten species collectively) do not belch planet-cooking methane the way cows do. But shrimp farms tend to occupy coastal land that used to be covered in mangroves. Draining mangrove swamps to make way for aquaculture is even more harmful to the atmosphere than felling rainforest to provide pasture for cattle. A study conducted in 2017 by CIFOR, a research institute, found that in both these instances, by far the biggest contribution to the carbon footprint of the resulting beef or shrimp came from the clearing of the land. As a result, CIFOR concluded, a kilo of farmed shrimp was responsible for almost four times the greenhouse-gas emissions of a kilo of beef. Eating a surf-and-turf dinner of prawn cocktail and steak, the study warned, can be more polluting than driving across America in a petrol-fuelled car. All this has given one Singaporean company a brain wave. "Farmed shrimps are often bred in overcrowded conditions and literally swimming in sewage water. We want to disrupt that -- to empower farmers with technology that is cleaner and more efficient," says Sandhya Sriram, one of the founders of Shiok Meats. The firm aims to grow artificial shrimp, much as some Western firms are seeking to create beef without cows. The process involves propagating shrimp cells in a nutrient-rich solution. Ms Sriram likens it to a brewery, disdaining the phrase "lab-grown." Since prawn-meat has a simpler structure than beef, it should be easier to replicate in this way. Moreover, shrimp is eaten in lots of forms and textures: whole, minced, as a paste and so on. The firm is already making shrimp mince which it has tested in Chinese dumplings. It hopes the by-product of the meat-growing can be used as a flavoring for prawn crackers and instant noodles. Eventually it plans to grow curved "whole" shrimp -- without the head and shell, that is. While producing shrimp this way currently costs $5,000 a kilo, Shiok Meats thinks it can bring the price down dramatically by using less rarefied ingredients in its growing solution.

Read more of this story at Slashdot.

08:34

'Ring' Upgrades Privacy Settings After Accusations It Shares Data With Facebook and Google [Slashdot]

Amazon's Ring doorbell cameras just added two new privacy and security features "amid rising scrutiny on the company," reports The Hill, including "a second layer of authentication by requiring users to enter a one-time code shared via email or SMS when they try to log in to see the feed from their cameras starting this week... "Until recently the company did not notify users when their accounts had been logged in to, meaning that hackers could have accessed camera feeds without owners being aware." But CBS News reports that the changes appeared "two weeks after a study showed the company shares customers' personal information with Facebook, Google and other parties without users' consent." In late January, an Electronic Frontier Foundation (EFF) study found the company regularly shares user data with Facebook, including that of Ring users who don't have accounts on the social media platform... EFF claims the company shares a lot of other user data, including people's names, email addresses, when the doorbell app was being used, the number of devices a user has, model numbers of devices, user's unique internet addresses and more. Such information could allow third parties to know when Ring users are at home or away, and potentially target them with advertising for services based on that info... The change will let Ring users block the company from sharing most, but not all, of their data. A company spokesperson said people will be able to opt out of those sharing agreements "where applicable." The spokesperson declined to clarify what "where applicable" might mean. Evan Greer, deputy director of digital rights organization Fight for the Future, shared a skeptical response with The Hill. "No amount of security updates will change the fact that these devices are enabling a nationwide, for-profit, surveillance empire. Amazon Ring is fundamentally incompatible with democracy and human rights."

Read more of this story at Slashdot.

08:04

Saturday Morning Breakfast Cereal - Freudian [Saturday Morning Breakfast Cereal]

Hovertext:
Just wait till I do them shambling on walkers in old age.

07:08

PipeWire 0.3 Released With Redesigned Scheduling Code To Offer JACK2-Like Performance [Phoronix]

PipeWire is the Red Hat engineered project aiming to offer better audio/video stream handling on Linux that integrates well with Flatpak and can optimally handle use-cases currently covered by the likes of PulseAudio and JACK. This week marked the release of PipeWire 0.3 as another big step forward for the effort...

06:04

A Few More Linux Kernel Patches Floated This Week For AMD Family 19h (Zen 3) [Phoronix]

Going back to the start of 2020 we've been seeing a few patches here and there around AMD Family 19h, almost certainly Zen 3. That patch work has continued with a few more bits out this week while hopefully more bring-up is on the horizon ahead of the Linux 5.7 merge window opening in just over one month's time...

06:00

After Inspecting 50 Airplanes, Boeing Found Foreign Object Debris in 35 Fuel Tanks [Slashdot]

Boeing has found debris in the fuel tanks of 35 of their 737 Max aircraft. After inspecting just 50 of the 400 planes which were awaiting delivery to customers, Boeing found debris in "about two-thirds" of them, reports the Wall Street Journal, citing both federal and aviation-industry officials. "The revelation comes as the plane maker struggles to restore public and airline confidence in the grounded fleet." Materials left behind include tools, rags and boot coverings, according to industry officials familiar with the details... [T]he new problem raises fresh questions about Boeing's ability to resolve lingering lapses in quality-control practices and presents another challenge to Chief Executive David Calhoun, who took charge in January... Last year, debris was found on some 787 Dreamliners, which Boeing produces in Everett, Washington... Boeing also twice had to halt deliveries of the KC-46A military refueling tanker to the U.S. Air Force after tools and rags were found in planes after they had been delivered from its Everett factory north of Seattle. Their report includes this observation from an Air Force procurement chief last summer. "It does not take a rocket scientist to deliver an airplane without trash and debris on it. It just merely requires following a set of processes, having a culture that values integrity of safety above moving the line faster for profit." But "This isn't an isolated incident either," argues long-time Slashdot reader phalse phace. "The New York Times reported about shoddy production and weak oversight at Boeing's North Charleston plant, which makes the 787 Dreamliner, back in April." A New York Times review of hundreds of pages of internal emails, corporate documents and federal records, as well as interviews with more than a dozen current and former employees, reveals a culture that often valued production speed over quality. Facing long manufacturing delays, Boeing pushed its work force to quickly turn out Dreamliners, at times ignoring issues raised by employees... Safety lapses at the North Charleston plant have drawn the scrutiny of airlines and regulators. Qatar Airways stopped accepting planes from the factory after manufacturing mishaps damaged jets and delayed deliveries. Workers have filed nearly a dozen whistle-blower claims and safety complaints with federal regulators, describing issues like defective manufacturing, debris left on planes and pressure to not report violations. Others have sued Boeing, saying they were retaliated against for flagging manufacturing mistakes.

Read more of this story at Slashdot.

05:07

NVIDIA Demonstrates Porting Of DirectX Ray-Tracing To Vulkan [Phoronix]

Big "open-source" achievements aren't too common for NVIDIA or Microsoft, much less together, but thanks to their open-source work on the DXC DirectXCompiler it's possible to easily convert HLSL DXR shaders to SPIR-V for Vulkan...

04:44

Google Announces The 200 Open-Source Projects For GSoC 2020 [Phoronix]

Google's Summer of Code initiative for getting students involved with open-source development during the summer months is now into its sixteenth year. This week Google announced the 200 open-source projects participating in GSoC 2020...

04:26

AMDVLK 2020.Q1.2 Released With Vulkan 1.2 Support [Phoronix]

AMDVLK 2020.Q1.2 is out as the first official AMD open-source Vulkan Linux driver code drop in one month...

03:00

A Quarter of All Tweets About Climate Crisis Produced By Bots [Slashdot]

XXongo writes: According to a yet-to-be-published Brown University study of the origin of 6.5 million tweets about climate and global warming, a quarter of all tweets about climate on an average day are produced by bots, disproportionately skeptical of climate science and action. The Brown University study wasn't able to identify any individuals or groups behind the battalion of Twitter bots, nor ascertain the level of influence they have had on the climate debate. "On an average day during the period studied, 25% of all tweets about the climate crisis came from bots," reports The Guardian. "This proportion was higher in certain topics -- bots were responsible for 38% of tweets about 'fake science' and 28% of all tweets about the petroleum giant Exxon. Conversely, tweets that could be categorized as online activism to support action on the climate crisis featured very few bots, at about 5% prevalence."

Read more of this story at Slashdot.

00:00

Scientists Found Breathable Oxygen In Another Galaxy For the First Time [Slashdot]

Astronomers have spotted molecular oxygen in a galaxy far far away, marking the first time that this important element has ever been detected outside of the Milky Way. Motherboard reports: This momentous "first detection of extragalactic molecular oxygen," as it is described in a recent study in The Astrophysical Journal, has big implications for understanding the crucial role of oxygen in the evolution of planets, stars, galaxies, and life. Oxygen is the third most abundant element in the universe, after hydrogen and helium, and is one of the key ingredients for life here on Earth. Molecular oxygen is the most common free form of the element and consists of two oxygen atoms with the designation O2. It is the version of the gas that we humans, among many other organisms, need to breathe in order to live. Now, a team led by Junzhi Wang, an astronomer at the Shanghai Astronomical Observatory, reports the discovery of molecular oxygen in a dazzling galaxy called Markarian 231, located 581 million light years from the Milky Way. The researchers were able to make this detection with ground-based radio observatories. "Deep observations" from the IRAM 30-meter telescope in Spain and the NOEMA interferometer in France revealed molecular oxygen emission "in an external galaxy for the first time," Wang and his co-authors wrote. Motherboard notes that you couldn't just inhale the molecular oxygen found in Markarian 231 like you would the oxygen on Earth. "This is because the oxygen is not mixed with the right abundances of nitrogen, carbon dioxide, methane, and all the other molecules that make Earth's air breathable to humans and other organisms." Still, the discovery "provides an ideal tool to study" molecular outflows from quasars and other AGNs, the team said in the study. [Markarian 231 has remained a curiosity to scientists for decades because it contains the closest known quasar, a type of hyper-energetic object. Quasars are active galactic nuclei (AGN), meaning that they inhabit the core regions of special galaxies, and they are among the most radiant and powerful objects in the universe.] "O2 may be a significant coolant for molecular gas in such regions affected by AGN-driven outflows," the researchers noted. "New astrochemical models are needed to explain the implied high molecular oxygen abundance in such regions several kiloparsecs away from the center of galaxies."

Read more of this story at Slashdot.

Friday, 21 February

20:30

JP Morgan Economists Warn of 'Catastrophic' Climate Change [Slashdot]

An anonymous reader quotes a report from the BBC: Human life "as we know it" could be threatened by climate change, economists at JP Morgan have warned. In a hard-hitting report to clients, the economists said that without action being taken there could be "catastrophic outcomes." The bank said the research came from a team that was "wholly independent from the company as a whole." Climate campaigners have previously criticized JP Morgan for its investments in fossil fuels. The firm's stark report was sent to clients and seen by BBC News. While JP Morgan economists have warned about unpredictability in climate change before, the language used in the new report was very forceful. "We cannot rule out catastrophic outcomes where human life as we know it is threatened," JP Morgan economists David Mackie and Jessica Murray said. Carbon emissions in the coming decades "will continue to affect the climate for centuries to come in a way that is likely to be irreversible," they said, adding that climate change action should be motivated "by the likelihood of extreme events." Climate change could affect economic growth, shares, health, and how long people live, they said. It could put stresses on water, cause famine, and cause people to be displaced or migrate. Climate change could also cause political stress, conflict, and it could hit biodiversity and species survival, the report warned. To mitigate climate change net carbon emissions need to be cut to zero by 2050. To do this, there needed to be a global tax on carbon, the report authors said. But they said that "this is not going to happen anytime soon."

Read more of this story at Slashdot.

19:02

Radical Hydrogen-Boron Reactor Leapfrogs Current Nuclear Fusion Tech [Slashdot]

HB11 Energy, a spin-out company originating at the University of New South Wales, claims its hydrogen-boron fusion technology is already working a billion times better than expected. Along with this announcement, the company also announced a swag of patents through Japan, China and the USA protecting its unique approach to fusion energy generation. New Atlas reports: The results of decades of research by Emeritus Professor Heinrich Hora, HB11's approach to fusion does away with rare, radioactive and difficult fuels like tritium altogether -- as well as those incredibly high temperatures. Instead, it uses plentiful hydrogen and boron B-11, employing the precise application of some very special lasers to start the fusion reaction. Here's how HB11 describes its "deceptively simple" approach: the design is "a largely empty metal sphere, where a modestly sized HB11 fuel pellet is held in the center, with apertures on different sides for the two lasers. One laser establishes the magnetic containment field for the plasma and the second laser triggers the 'avalanche' fusion chain reaction. The alpha particles generated by the reaction would create an electrical flow that can be channeled almost directly into an existing power grid with no need for a heat exchanger or steam turbine generator." HB11's Managing Director Dr. Warren McKenzie clarifies over the phone: "A lot of fusion experiments are using the lasers to heat things up to crazy temperatures -- we're not. We're using the laser to massively accelerate the hydrogen through the boron sample using non-linear forces. You could say we're using the hydrogen as a dart, and hoping to hit a boron, and if we hit one, we can start a fusion reaction. That's the essence of it. If you've got a scientific appreciation of temperature, it's essentially the speed of atoms moving around. Creating fusion using temperature is essentially randomly moving atoms around, and hoping they'll hit one another, our approach is much more precise." He continues: "The hydrogen/boron fusion creates a couple of helium atoms. They're naked heliums, they don't have electrons, so they have a positive charge. We just have to collect that charge. Essentially, the lack of electrons is a product of the reaction and it directly creates the current." The lasers themselves rely upon cutting-edge "Chirped Pulse Amplification" technology, the development of which won its inventors the 2018 Nobel prize in Physics. Much smaller and simpler than any of the high-temperature fusion generators, HB11 says its generators would be compact, clean and safe enough to build in urban environments. There's no nuclear waste involved, no superheated steam, and no chance of a meltdown. "This is brand new," Professor Hora tells us. "10-petawatt power laser pulses. It's been shown that you can create fusion conditions without hundreds of millions of degrees. This is completely new knowledge. I've been working on how to accomplish this for more than 40 years. It's a unique result. Now we have to convince the fusion people -- it works better than the present day hundred million degree thermal equilibrium generators. We have something new at hand to make a drastic change in the whole situation. A substitute for carbon as our energy source. A radical new situation and a new hope for energy and the climate."

Read more of this story at Slashdot.

18:25

Scientists Condemn Conspiracy Theories About Origin of Coronavirus Outbreak [Slashdot]

hackingbear writes: A group of 27 prominent public health scientists from outside China, who have studied SARS-CoV-2 and "overwhelmingly conclude that this coronavirus originated in wildlife" just like many other viruses that have recently emerged in humans, is pushing back against a steady stream of stories and even a scientific paper suggesting a laboratory in Wuhan, China, may be the origin of the outbreak of COVID-19. "The rapid, open, and transparent sharing of data on this outbreak is now being threatened by rumors and misinformation around its origins," the scientists, from nine countries, write in a statement published online by The Lancet . Many posts on social media have singled out the Wuhan Institute of Virology for intense scrutiny because it has a laboratory at the highest security level -- biosafety level 4 -- and its researchers study coronaviruses from bats; speculations have included the possibility that the virus was bioengineered in the lab or that a lab worker was infected while handling a bat. Researchers from the institute have insisted there is no link between the outbreak and their laboratory. Peter Daszak, president of the EcoHealth Alliance and a cosignatory of the statement, has collaborated with researchers at the Wuhan institute who study bat coronaviruses. "We're in the midst of the social media misinformation age, and these rumors and conspiracy theories have real consequences, including threats of violence that have occurred to our colleagues in China."

Read more of this story at Slashdot.

18:17

Are we having fund yet, npm? CTO calls for patience after devs complain promised donations platform has stalled [The Register]

Funding free software is 'still a very unsolved problem' says co-founder

At the end of August, JavaScript package registry NPM Inc said it intended "to finalize and launch an Open Source funding platform by the end of 2019."…

15:35

Steam Play's Proton 5.0-3 Released With Support For Metro Exodus Direct3D 12 Mode [Phoronix]

CodeWeavers, working under contract for Valve on their Wine downstream Proton, is out with a new update to their Proton 5.0 series...

14:45

Third time's a charm, maybe: Bankers suing Oracle over claims of exaggerated cloud sales have another go at convincing skeptical judge [The Register]

Anecdotes of bullying customers fall short of legal standard to establish fraudulent intent

The financial group suing Oracle for allegedly deceiving investors by inflating its cloud revenue this week took a third stab at articulating its claim against the database giant.…

13:36

Duped into running bogus virus scans at Office Depot? Dry your eyes with a small check from $35m settlement [The Register]

Treat yourself to a meal out or a case of bevvies... or an appetizer in SF or NYC

Victims of dodgy IT support from Office Depot will start receiving compensation checks, a US consumer watchdog said Thursday.…

12:18

This is your last chance, HP. There's no turning back. You take blue poison pill, the story ends. You take the red Xerox pill, you stay in Wonderland [The Register]

Photocopier goliath hits back at PC giant's attempt to scupper takeover

Xerox has shot back at HP's decision to adopt a shareholder rights plan – a poison pill designed to derail the photocopier titan's $36.5bn hostile takeover of the PC'n'printer slinger.…

11:40

Benchmarking OpenMandriva's AMD Ryzen Optimized Linux Distribution On The Threadripper 3970X [Phoronix]

While Clear Linux is well known as Intel's performance-optimized Linux distribution, catered towards performing best on Intel hardware (though, as we continue to show, Clear Linux also performs incredibly well on AMD hardware and is generally faster than other distributions), when it comes to AMD-optimized distributions the primary example remains OpenMandriva. Since 2018 OpenMandriva has been providing an AMD Zen optimized build where their operating system and entire package archive are built with the "znver1" compiler optimizations. As it's been almost a year since last looking at OpenMandriva's Zen optimized build, here are some fresh benchmarks using the newly-released OpenMandriva 4.1.

11:38

Saturday Morning Breakfast Cereal - The Denial of Butts [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Humans are always complaining about being a mind attached to a body, but the body was here first.


Today's News:

A little nerdie told me BAHFest tickets are selling out fast! Get 'em while they're extant, geeks of Houston and London!

10:14

Using your devices as the key to your apps [The Cloudflare Blog]

I keep a very detailed budget. I have for the last 7 years. I manually input every expense into a spreadsheet app and use a combination of sumifs functions to track spending.

Opening the spreadsheet app, and then the specific spreadsheet, every time that I want to submit an expense is a little clunky. I'm working on a new project to make that easier. I'm building a simple web app, with a very basic form, into which I will enter one-off expenses. This form will then append those expenses as rows into the budget workbook.

I want to lock down this project; I prefer that I am the only person with the power to wreck my budget. To do that, I'm going to use Cloudflare Access. With Access, I can require a login to reach the page - no server-side changes required.

Except, I don't want to allow logins from any device. For this project, I want to turn my iPhone into the only device that can reach this app.

To do that, I'll use Cloudflare Access in combination with an open source toolkit from Cloudflare, cfssl. Together, I can convert my device into a secure key for this application in about 45 minutes.

While this is just one phone and a simple project, a larger organization could scale this up to hundreds of thousands or millions - without spending 45 minutes per device. Authentication occurs in the Cloudflare network and lets teams focus on securely deploying devices, from IoT sensors to corporate laptops, that solve new problems.


🎯 I have a few goals for this project:

  • Protect my prototype budget-entry app with authentication
  • Avoid building a custom login flow into the app itself
  • Use mutual TLS (mTLS) authentication so that only requests from my iPhone are allowed

🗺️ This walkthrough covers how to:

  • Build an Access policy to enforce mutual TLS authentication
  • Use Cloudflare's PKI toolkit to create a Root CA and then generate a client certificate
  • Use OpenSSL to convert that client certificate into a format for iPhone usage
  • Place that client certificate on my iPhone

⏲️ Time to complete: ~45 minutes


Cloudflare Access

Cloudflare Access is a bouncer that checks ID at the door. Any and every door.

Old models of security built on private networks operate like a guard at the front door of a large apartment building, except this apartment building does not have locks on any of the individual units. If you can walk through the front door, you could walk into any home. By default, private networks assume that a user on that network is trusted until proven malicious - you're free to roam the building until someone reports you. None of us want to live in that complex.

Access replaces that model with a bouncer in front of each apartment unit. Cloudflare checks every attempt to reach a protected app, machine, or remote desktop against rules that define who is allowed in.

To perform that check, Access needs to confirm a user's identity. To do that, teams can integrate Access with identity providers like G Suite, AzureAD, Okta or even Facebook and GitHub.

For this project, I want to limit not just who can reach the app, but also what can reach it. I want to only allow my particular iPhone to connect. Since my iPhone does not have its own GitHub account, I need to use a workflow that allows devices to authenticate: certificates, specifically mutual TLS (mTLS) certificate authentication.

📃 Please reach out. Today, the mTLS feature in Access is only available to Enterprise plans. Are you on a self-serve plan and working on a project where you want to use mTLS, whether for IoT, service-to-service, or corporate security? If so, reach out to me at srhea@cloudflare.com and let's chat.

mTLS and cfssl

Public key infrastructure (PKI) makes it possible for your browser to trust that this blog really is blog.cloudflare.com. When you visit this blog, the site presents a certificate to tell your browser that it is the real blog.cloudflare.com.

Your browser is skeptical. It keeps a short list of root certificates that it will trust. Your browser will only trust certificates signed by authorities in that list. Cloudflare offers free certificates for hostnames using its reverse proxy. You can also get origin certificates from other services like Let's Encrypt. Either way, when you visit a web page with a certificate, you can ensure you are on the authentic site and that the traffic between you and the blog is encrypted.

For this project, I want to go the other direction. I want my device to present a certificate to Cloudflare Access demonstrating that it is my authentic iPhone. To do that, I need to create a chain that can issue a certificate to my device.

Cloudflare publishes an open source PKI toolkit, cfssl, which can solve that problem for me. cfssl lets me quickly create a Root CA and then use that root to generate a client certificate, which will ultimately live on my phone.

To begin, I'll follow the instructions here to set up cfssl on my laptop. Once installed, I can start creating certificates.
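
cfssl is distributed as a Go toolkit, so one way to install the two binaries used below (assuming a recent Go toolchain; package managers such as Homebrew also carry cfssl) is:

$ go install github.com/cloudflare/cfssl/cmd/cfssl@latest
$ go install github.com/cloudflare/cfssl/cmd/cfssljson@latest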

Generating a Root CA and an allegory about Texas

First, I need to create the Root CA. This root will give Access a basis for trusting client certificates. Think of the root as the Department of Motor Vehicles (DMV) in Texas. Only the State of Texas, through the DMV, can issue Texas driver licenses. Bouncers do not need to know about every driver license issued, but they do know to trust the State of Texas and how to validate Texas-issued licenses.

In this case, Access does not need to know about every client cert issued by this Root CA. The product only needs to know to trust this Root CA and how to validate if client certificates were issued by this root.

I'm going to start by creating a new directory, cert-auth, to keep things organized. Inside of that directory, I'll create a folder, root, where I'll store the Root CA materials.
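
In shell terms, that layout is just two commands (the directory names here are only a personal convention):

$ mkdir -p cert-auth/root
$ cd cert-auth/root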

Next, I'll define some details about the Root CA. I'll create a file, ca-csr.json and give it some specifics that relate to my deployment.

{
    "CN": "Sam Money App",
    "key": {
      "algo": "rsa",
      "size": 4096
    },
    "names": [
      {
        "C": "PT",
        "L": "Lisboa",
        "O": "Money App Test",
        "OU": "Sam Projects",
        "ST": "Lisboa"
      }
    ]
  }

Now I need to configure how the CA will be used. I'll create another new file, ca-config.json, and add the following details.

{
    "signing": {
      "default": {
        "expiry": "8760h"
      },
      "profiles": {
        "server": {
          "usages": ["signing", "key encipherment", "server auth"],
          "expiry": "8760h"
        },
        "client": {
          "usages": ["signing","key encipherment","client auth"],
          "expiry": "8760h"
        }
      }
    }
  }

The ca-csr.json file gives the Root CA a sense of identity and the ca-config.json will later define the configuration details when signing new client certificates.

With that in place, I can go ahead and create the Root CA. I'll run the following command in my terminal from within the root folder.

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

The “Root CA” here is really a composition of three files, all of which are created by that command. cfssl generates a private key, a certificate signing request, and the certificate itself. The output should resemble this screenshot:

[Screenshot: cfssl gencert output]
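
In file terms, the root folder should now hold something like the following; the ca prefix comes from the -bare ca argument passed to cfssljson:

$ ls
ca.csr  ca-key.pem  ca.pem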

I need to guard the private key like it's the only thing that matters. In real production deployments, most organizations will create an intermediate certificate and sign client certificates with that intermediate. This allows administrators to keep the root locked down even further; they only need to handle it when creating new intermediates (and those intermediates can be quickly revoked). For this test, I'm just going to use a root to create the client certificates.

Now that I have the Root CA, I can upload the certificate in PEM format to Cloudflare Access. Cloudflare can then use that certificate to authenticate incoming requests for a valid client certificate.

In the Cloudflare Access dashboard, I'll use the card titled “Mutual TLS Root Certificates”. I can click “Add A New Certificate” and then paste the content of the ca.pem file directly into it.

I need to associate this certificate with a fully qualified domain name (FQDN). In this case, I'm going to use the certificate to authenticate requests for money.samrhea.com, so I'll just input that subdomain, but I could associate this cert with multiple FQDNs if needed.

Once saved, the Access dashboard will list the new Root CA.

Building an Access Policy

Before I deploy the budget app prototype to money.samrhea.com, I need to lock down that subdomain with an Access policy.

In the Cloudflare dashboard, I'll select the zone samrhea.com and navigate to the Access tab. Once there, I can click Create Access Policy in the Access Policies card. That card will launch an editor where I can build out the rule(s) for reaching this subdomain.

[Screenshot: Access policy editor]

In the example above, the policy will be applied to just the subdomain money.samrhea.com. I could make it more granular with path-based rules, but I'll keep it simple for now.

In the Policies section, I'm going to create a rule to allow client certificates signed by the Root CA I generated to reach the application. In this case, I'll pick “Non Identity” from the Decision drop-down. I'll then choose “Valid Certificate” under the Include details.

This will allow any valid certificate signed by the “Money App Test” CA I uploaded earlier. I could also build a rule using Common Names, but I'll stick with valid cert for now. I'll hit Save and finish the certificate deployment.

Issuing client certs and converting to PKCS #12

So far, I have a Root CA and an Access policy that enforces mTLS with client certs issued by that Root CA. I've stationed a bouncer outside of my app and told them to only trust ID cards issued by The State of Texas. Now I need to issue a license in the form of a client certificate.

To avoid confusion, I'm going to create a new folder in the same directory as the root folder, this one called client. Inside of this directory, I'll create a new file: client-csr.json with the following .json blob:

{
    "CN": "Rhea Group",
    "hosts": [""],
    "key": {
      "algo": "rsa",
      "size": 4096
    },
    "names": [
      {
        "C": "PT",
        "L": "Lisboa",
        "O": "Money App Test",
        "OU": "Sam Projects",
        "ST": "Lisboa"
      }
    ]
  }

This sets configuration details for the client certificate that I'm about to request.

I can now use cfssl to generate a client certificate against my Root CA. The command below uses the -profile flag to create the client cert using the JSON configuration I just saved. This also gives the file the name iphone-client.

$ cfssl gencert -ca=../root/ca.pem -ca-key=../root/ca-key.pem -config=../root/ca-config.json -profile=client client-csr.json | cfssljson -bare iphone-client

The combined output should resemble the following:

  • client-csr.json: The JSON configuration created earlier to specify client cert details.
  • iphone-client-key.pem: The private key for the client certificate generated.
  • iphone-client.csr: The certificate signing request used to request the client cert.
  • iphone-client.pem: The client certificate created.

With my freshly minted client certificate and key, I can go ahead and test that it works with my Access policy with a quick cURL command.

$ curl -v --cert iphone-client.pem --key iphone-client-key.pem https://money.samrhea.com

That works, but I'm not done yet. I need to get this client certificate on my iPhone. To do so, I need to convert the certificate and key into a format that my iPhone understands, PKCS #12.

PKCS #12 is a file format used for storing cryptographic objects. To convert the two .pem files, the certificate and the key, into PKCS #12, I'm going to use the OpenSSL command-line tool.

OpenSSL is a popular toolkit for TLS and SSL protocols that can solve a wide variety of certificate use cases. In my example, I just need it for one command:

$ openssl pkcs12 -export -out sam-iphone.p12 -inkey iphone-client-key.pem -in iphone-client.pem -certfile ../root/ca.pem

The command above takes the key and certificate generated previously and converts them into a single .p12 file. I'll also be prompted to create an “Export Password”. I'll use something that I can remember, because I'm going to need it in the next section.
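
As an optional sanity check, openssl can also print a summary of the bundle it just created (it will prompt for that same export password):

$ openssl pkcs12 -info -in sam-iphone.p12 -noout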

Authenticating from my iPhone

I now need to get the .p12 file on my iPhone. In corporate environments, organizations distribute client certificates via mobile device management (MDM) programs or other tools. I'm just doing this for a personal test project, so I'm going to use AirDrop.

Once my iPhone receives the file, I'll be prompted to select a device where the certificate will be installed as a device profile.

I'll then be prompted to enter my device password and the password set in the “Export” step above. Once complete, I can view the certificate under Profiles in Settings.

Now, when I visit money.samrhea.com for the first time from my phone, I'll be prompted to use the profile created.

Browsers can exhibit strange behavior when handling client certificate prompts. This should be the only time I need to confirm this profile should be used, but it might happen again.

What's next?

My prototype personal finance app is now only accessible from my iPhone. This also makes it easy to login through Access from my device.

Access policies can be pretty flexible. If I want to reach it from a different device, I could build a rule to allow logins through Google as an alternative. I can also create a policy to require both a certificate and SSO login.

Beyond just authentication, I can also build something with this client cert flow now. Cloudflare Access makes the details from the client cert, the ones I created earlier in this tutorial, available to Cloudflare Workers. I can start to create routing rules or trigger actions based on the details about this client cert.

09:20

Why so shy, Samsung? Weird Find my Phone push notification did not only affect Galaxy mobes [The Register]

Register readers around the globe shared in worldwide oddity

Concern is growing over the security of Samsung's Android infrastructure after readers from around the world told The Register that yesterday's Find my Mobile push notification affected them – including on devices where the offending app was disabled.…

08:49

Mesa's RADV Vulkan Driver Adding Compatibility For Use With The AMD Radeon GPU Profiler [Phoronix]

To date the Mesa "RADV" Radeon Vulkan driver hasn't supported AMD's GPUOpen Radeon GPU Profiler but that is changing...

08:40

Managed services slinger Ensono waves goodbye to staff on both sides of the pond [The Register]

Says a big hello to low-cost services land, aka India

Managed services pusher Ensono is to chop 137 employees across its UK and US global support desk and technology teams to reduce costs, and has said that hiring in India is a key element of delivering services.…

08:00

Ofcom measured UK's 5G radiation and found that, no, it won't give you cancer [The Register]

Dangerous levels of EMF: Evidence-based Measurement Findings

UK comms regulator Ofcom today published the results of its latest spectrum measurement tests, which tracked electromagnetic field emissions at 16 of the busiest 5G sites.…

07:20

Californian man served with restraining order for allegedly 'stalking' Apple CEO Tim Cook [The Register]

Also slapped with court request not to contact security staffers

Members of Silicon Valley-based security firm Urban Tactical Group (UTG), which does "regular" work for Apple, have been granted a temporary restraining order preventing a Californian man from approaching them.…

06:50

Get in the C: Raspberry Pi 4 can handle a wider range of USB adapters thanks to revised design's silent arrival [The Register]

Resistance no longer futile?

There is good news for prospective buyers of the diminutive Raspberry Pi 4 as the USB-C issue that stopped the device working with some power supplies has been fixed.…

06:20

Come on baby light me on fire: McDonald's to sell 'Quarter Pounder' scented candles [The Register]

Because the human condition isn't harrowing enough

Fear, shame, regret and Quarter Pounder® with Cheese – now you can relive the scents of last night in your living room thanks to obesity merchants McDonald's.…

05:50

Windows Dressing: Psst... Fast Ring folks, whispers Microsoft. You're in this for the cool icons, right? [The Register]

Fluent, fluent everywhere but not a patch that works

Good news everyone! While Microsoft seems unable to deliver a patch that won't leave Windows 10 in a parlous state for some users, it does possess the will to fiddle with the icons. Again.…

05:25

Intel Compute Runtime Adds OCLOC Multi-Device Compilation [Phoronix]

Version 20.07.15711 of the Intel Compute Runtime was released this morning...

05:16

Breaking bad... browser use: New Mexico accuses Google of illegally slurping kids' private data via G Suite [The Register]

Web giant hits back, says allegations are 'factually wrong'

New Mexico has sued Google, claiming the ad-slinging web titan broke its promises – and the law – by covertly collecting personal information and the browsing habits of children.…

05:01

Intel Ethernet E823 Support Coming To The ICE Driver In Linux 5.7 [Phoronix]

Intel's ICE driver for the Ethernet E800 series is seeing a new member of the family come Linux 5.7...

04:43

Linux EFI Going Through Spring Cleaning Before RISC-V Support Lands [Phoronix]

The Linux EFI boot code is going through some "spring cleaning" ahead of the RISC-V EFI support landing that still could make it for the Linux 5.7 kernel cycle this spring...

04:29

Decent, legal, honest and searchable: C'mon, Ofcom. Let us check up on the ad-slingers ourselves [The Register]

It's a hard job... why not outsource it?

Column  Our favourite controller of UK media, Ofcom, is being given new powers to regulate the internet. Or censor it, depending on your preferred spin. It's all a bit fuzzy at the moment: with illegal content, the regulator will watch for the usual monsters of terrorism and child abuse and act swiftly to close them down and keep them down.…

04:25

Linux 5.7 DRM Bringing New "TIDSS" Driver [Phoronix]

The first batch of DRM-Misc changes following the recent Linux 5.6 merge window have been merged into DRM-Next in forming the early material that will ultimately come to the Linux 5.7 cycle in April...

03:45

AMD takes a bite out of Intel's PC market share across Europe amid microprocessor shortages, rising Ryzen [The Register]

Mmmm, these scraps are pretty darn meaty

Intel is losing ground to AMD in every corner of the European PC industry serviced by the channel, according to official sales stats from distributors.…

03:00

'Don't tell anyone but I have a secret.' There, that's my security sorted [The Register]

The inevitable return of Norbert Spankmonkey

Something for the Weekend, Sir?  Where's my free promo tat? Fellow convention attendees have no such problem being showered with promotional gifts from all sides as they totter up and down the rows of booths.…

02:15

Your McDonald's demo has expired. For full functionality, please purchase a licence or try another fast-food joint [The Register]

I'll take a Big Mac, large fries and... um, are you OK?

Bork!Bork!Bork!  There is a saying about networking fails: "It's not DNS. It can't be DNS. It was DNS." So far for The Register's column of retail calamity, it's McDonald's. It's nearly always McDonald's.…

01:15

The self-disconnecting switch: Ghost in the machine or just a desire to save some cash? [The Register]

Yet another reason to never do things by halves

On Call  The weekend is a day away, but before you swan off, please join us for another episode of ticketing system terror with The Register's regular On Call.…

00:36

If you're struggling to keep new year resolutions, try NGTS-10b, a mere 1,000 LY away. One year is just 18 hrs [The Register]

Happy birthday to me... Happy birthday to me... Happy birthday to me... Happy birthday to me... Happy birthday to me... Happy birthday to me...

Astronomers have discovered a hot Jupiter-like exoplanet with the shortest orbital period yet: a year on this large puffy world lasts just 18 hours.…

00:00

Keep cloud innovation rolling at your biz by getting yourself to Gartner’s Infrastructure and Operations Conference [The Register]

Discover which developments lie ahead: 16 - 17 June, Frankfurt

Promo  “Digital transformation” in practice still basically boils down to hybrid cloud, and while more and more of us are bolting public and private cloud infrastructure together, it’s no less important to keep looking for new inspiration as we put new technology and skills in place within the enterprise.…

00:00

Demonstrating PERL with Tic-Tac-Toe, Part 1 [Fedora Magazine]

Larry Wall’s Practical Extraction and Reporting Language (PERL) was originally developed in 1987 as a general-purpose Unix scripting language that borrowed features from C, sh, awk, sed, BASIC, and LISP. In the late 1990s, before PHP became more popular, PERL was commonly used for CGI scripting. PERL is still the go-to tool for many sysadmins who need something more powerful than sed or awk when writing complex parsing and automation scripts. It has a somewhat high learning curve due to its dense notation. But a recent survey indicates that PERL developers earn 54 per cent more than the average developer. So it may still be a worthwhile language to learn.

PERL is far too complex to cover in any significant detail in this magazine. But this short series of articles will attempt to demonstrate a few of the most basic features of the language so that you can get a sense of what the language is like and the kind of things it can do.

An example PERL program

PERL was originally a language optimized for scanning arbitrary text files, extracting information from those text files, and printing reports based on that information. To demonstrate how this core feature of PERL works, a very simple Tic-Tac-Toe game is provided below. The below program scans a textual representation of a Tic-Tac-Toe board, extracts and manipulates the numbers on the board, and prints the modified result to the console.

00 #!/usr/bin/perl
01 
02 use feature 'state';
03 
04 use constant MARKS=>[ 'X', 'O' ];
05 use constant BOARD=>'
06 ┌───┬───┬───┐
07 │ 1 │ 2 │ 3 │
08 ├───┼───┼───┤
09 │ 4 │ 5 │ 6 │
10 ├───┼───┼───┤
11 │ 7 │ 8 │ 9 │
12 └───┴───┴───┘
13 ';
14 
15 sub get_mark {
16    my $game = shift;
17    my @nums = $game =~ /[1-9]/g;
18    my $indx = (@nums+1) % 2;
19 
20    return MARKS->[$indx];
21 }
22 
23 sub put_mark {
24    my $game = shift;
25    my $mark = shift;
26    my $move = shift;
27 
28    $game =~ s/$move/$mark/;
29 
30    return $game;
31 }
32 
33 sub get_move {
34    return (<> =~ /^[1-9]$/) ? $& : '0';
35 }
36 
37 PROMPT: {
38    state $game = BOARD;
39 
40    my $mark;
41    my $move;
42 
43    print $game;
44 
45    last PROMPT if ($game !~ /[1-9]/);
46 
47    $mark = get_mark $game;
48    print "$mark\'s move?: ";
49 
50    $move = get_move;
51    $game = put_mark $game, $mark, $move;
52 
53    redo PROMPT;
54 }

To try out the above program on your PC, you can copy-and-paste the above text into a plain text file and save and run it. The line numbers will have to be removed before the program will work. Of course, the command that one uses to perform that sort of textual extraction and reporting is perl.

Assuming that you have saved the above text to a file named game.txt, the following command can be used to strip the leading numbers from all the lines and write the modified version to a new file named game:

$ cat game.txt | perl -npe 's/...//' > game

The above command is a very small PERL script and it is an example of what is called a one-liner.
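
One-liners like this are handy for quick checks too; for example, this command (not part of the game itself) prints only the lines of the stripped file that still contain an open square:

$ perl -ne 'print if /[1-9]/' game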

Now that the line numbers have been removed, the program can be run by entering the following command:

$ perl game

How it works

PERL is a procedural programming language. A program written in PERL consists of a series of commands that are executed sequentially. With few exceptions, most commands alter the state of the computer’s memory in some way.

Line 00 in the Tic-Tac-Toe program isn’t technically part of the PERL program and it can be omitted. It is called a shebang (the letter e is pronounced soft as it is in the word shell). The purpose of the shebang line is to tell the operating system what interpreter the remaining text should be processed with if one isn’t specified on the command line.

Line 02 isn’t strictly necessary for this program either. It makes available an advanced command named state. The state command creates a variable that can retain its value after it has gone out of scope. I’m using it here as a way to avoid declaring a global variable. It is considered good practice in computer programming to avoid using global variables where possible because they allow for action at a distance. If you didn’t follow all of that, don’t worry about it. It’s not important at this point.

PERL scopes, blocks and subroutines

Scope is a very important concept that one needs to be familiar with when reading and writing procedural programs. In PERL, scope is often delineated by a pair of curly brackets. Within the global scope, the above Tic-Tac-Toe program defines four sub-scopes on lines 15-21, 23-31, 33-35 and 37-54. The first three scopes are prefixed with subroutine declarations and the last scope is prefixed with the label PROMPT.

Scopes serve multiple purposes in programming languages. One purpose of a scope is to group a set of commands together as a unit so that they can be called repeatedly with a single command rather than having to repeat several lines of code each time in the program. Another purpose is to enhance the readability of the program by denoting a restricted area where the value of a variable can be updated.

Within the scope that is labeled PROMPT and defined on lines 37-54 of the above Tic-Tac-Toe program, a variable named mark is created using the my keyword (line 40). After it is created, it is assigned a value by calling the get_mark subroutine (line 47). Later, the put_mark subroutine is called (line 51) to change the value in the square that was chosen by the get_move subroutine on line 50.

Hopefully it is obvious that the mark that put_mark is setting is meant to be the same mark that get_mark retrieved earlier. As a programmer though, how do I know that the value of mark wasn’t changed when the get_move subroutine was called? This example program is small enough that every line can be examined to make that determination. But most programs are much larger than this example and having to know exactly what is going on at all points in the program’s execution can be overwhelming and error-prone. Because mark was created with the my keyword, its value can only be accessed and modified within the scope that it was created (or a sub-scope). It doesn’t matter what subroutines at parallel or higher scopes do; even if they change variables with the same name in their own scopes. This property of scopes — restricting the range of lines on which the value of a variable can be updated — improves the readability of the code by allowing the programmer to focus on a smaller section of the program without having to be concerned about what is happening elsewhere in the program.
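
A tiny standalone example, separate from the game program, shows that property in action:

my $mark = 'X';
sub scribble { my $mark = 'O'; }   # this $mark exists only inside scribble
scribble();
print "$mark\n";                   # still prints "X"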

Lines 04 and 05 define the MARKS and BOARD variables, respectively. Because they are not within any curly bracket pairing, they exist in the global scope. It is permissible to create constant variables in the global scope because they are read-only and therefore not subject to the action at a distance concern. In PERL, it is traditional to name constants in all upper case letters.

Notice that scopes can be nested such that variables defined in outer scopes can be accessed and modified from within inner scopes. This is why the MARKS and BOARD variables can be accessed within the get_mark subroutine and PROMPT block respectively — they are sub-scopes of the global scope.

The statements in the program are executed in order from top to bottom and left to right. Each statement is terminated with a semi-colon (;). The semi-colon can be omitted from the last statement in any scope, and none is needed after the closing bracket of the block-level statements that define the flow of the program, such as sub, if and while.

In PERL nomenclature, scopes are called blocks. Scope is the more general term that is typically used in online references like Wikipedia, but the remainder of this article will use the more perlish term blocks.

The statements within the first three blocks are not immediately executed as the program is evaluated from top to bottom. Rather, they are associated with the subroutine name preceding the block. This is the function of the sub keyword — it associates a subroutine name with a block of statements so that they can be called as a unit elsewhere in the program. The three subroutines get_mark (lines 15-21), put_mark (lines 23-31), and get_move (lines 33-35) are called on lines 47, 51 and 50 respectively.

The PROMPT block is not associated with a subroutine definition or other flow-control statement, so the statements within it are immediately executed in sequence when the program is run.

PERL regular expressions

If there is one feature that is more central to PERL than any other it is regular expressions. Notice that in the example Tic-Tac-Toe program every block contains a =~ (or !~) operator followed by some text surrounded with forward slashes (/). The text within the forward slashes is called a regular expression and the operator binds the regular expression to a variable or data stream.

It is important to note that there are different regular expression syntaxes. Some editors and command-line tools (for example, grep) allow the user to select which regular expression syntax they prefer to use. PERL-Compatible Regular Expressions (PCRE) are by far the most powerful.

Regular expressions used in matching operations

The result of applying the regular expression to a variable or data stream is usually a value that, when used in a flow-control statement such as if or while, will evaluate to true or false depending on whether or not the match succeeded. There are modifiers that can be appended to the closing slash of a regular expression to change its return value.

Line 45 of the Tic-Tac-Toe program provides a typical example of how a regular expression is used in a PERL program. The regular expression [1-9] is being applied to the variable game which holds the in-memory representation of the Tic-Tac-Toe game board. The expression is a character class that matches any character in the range from 1 to 9 (inclusive). The result of the regular expression will be true only if a character from 1 to 9 is present in what is being evaluated. On line 45, the !~ operator applies the regular expression to the game variable and negates its sense such that the result will be true only if none of the characters from 1 to 9 are present. Because the regular expression is embedded within the conditional clause of the if statement modifier, the statement last PROMPT is only executed if there are no characters in the range from 1 to 9 left on the game board.
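
Two throwaway boards (plain ASCII, just for illustration) show how the negated binding behaves:

print "game over\n" if '| X | O | X |' !~ /[1-9]/;   # prints: no open squares remain
print "game over\n" if '| X | 2 | X |' !~ /[1-9]/;   # prints nothing: square 2 is still open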

The last statement is one of a few flow-control statements in PERL that allow the program execution sequence to jump from the current line to another line somewhere else in the program. Other flow-control statements that work in a similar fashion include next, continue, redo and goto (the goto statement should be avoided whenever possible because it allows for spaghetti code).

In the example Tic-Tac-Toe program, the last PROMPT statement on line 45 causes program execution to resume just after the PROMPT block. Because there are no more statements in the program, the program will terminate.

The label PROMPT was chosen arbitrarily. Any label (or none at all) could have been used.

The redo PROMPT statement at the end of the PROMPT block causes program execution to jump back to the beginning of the PROMPT block.

Notice that the state keyword, like the my keyword, creates a variable that can only be accessed or modified within the block in which it is created (or a nested sub-block, if any exist). Unlike the my keyword, variables created with the state keyword keep their former value when the blocks they are in are called repeatedly. This is the behavior that is needed for the game variable because it is being updated incrementally each time the PROMPT block is run. The mark and move variables are meant to be different on each iteration of the PROMPT block, so they do not need to be created with the state keyword.
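
A small counter subroutine, unrelated to the game, illustrates the difference between state and my:

use feature 'state';

sub counter {
   state $total = 0;   # initialized once, value kept between calls
   my $fresh = 0;      # re-created as 0 on every call
   return ++$total;
}

print counter(), counter(), counter(), "\n";   # prints 123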

Regular expressions used for input validation

Another common use of regular expressions is for input validation. Line 34 of the example Tic-Tac-Toe program provides an example of a regular expression being used for input validation. The expression on line 34 is similar to the one on line 45. It is also checking for characters from 1 to 9. However, it is performing the check against the null filehandle (<>); it is using the =~ operator; and it is prefixed and suffixed with the zero-width assertions ^ and $ respectively.

The null filehandle, when accessed as it is on line 34, will cause the program to pause until one line of input is provided. The regular expression will evaluate to true only if the line contains one character in the range from 1 to 9. The assertions ^ and $ do not match any characters. Rather, they match the beginning and end positions, respectively, on the line. The regular expression effectively reads: “Begin (^) with one character in the range from 1 to 9 ([1-9]) and end ($)”.

Because it is embedded in the conditional clause of the ternary operator, line 34 will return either what was matched ($&) if the match succeeded or the character zero (0) if it failed. If the input were not validated in this way, then the user could submit their opponent’s mark rather than a number on the board.
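
A few hard-coded inputs, standing in for lines read from the null filehandle, show what the validation accepts and rejects:

my $good = ('5'  =~ /^[1-9]$/) ? $& : '0';   # '5'  - a single digit in range passes
my $long = ('12' =~ /^[1-9]$/) ? $& : '0';   # '0'  - two characters fail the ^ and $ anchors
my $mark = ('O'  =~ /^[1-9]$/) ? $& : '0';   # '0'  - an opponent's mark is rejected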

Regular expressions used for filtering data

Line 17 demonstrates using the global modifier (g) on a regular expression. With the global modifier, a match evaluated in list context returns a list of all the matched substrings rather than a simple true or false value; using that resulting array in scalar context then gives the number of matches.

Line 17 uses a regular expression to copy all the numbers in the range from 1 to 9 from the game variable into the array named nums. Line 18 then uses the modulo operator with the integer 2 as its second argument to determine whether the length of the nums array is even or odd. The formula on line 18 will result in 0 if the length of nums is odd and 1 if the length of nums is even. Finally, the computed index (indx) is used to access an element of the MARKS array and return it. Using this formula, the get_mark function will alternately return X or O depending on whether there are an odd or even number of positions left on the board.
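
Worked through with a simplified, ASCII-only board:

my @nums = ('| X | 2 | X | 4 |' =~ /[1-9]/g);   # list context: ("2", "4")
my $indx = (@nums + 1) % 2;                     # two squares left (even), so (2 + 1) % 2 = 1
my $mark = ['X', 'O']->[$indx];                 # 'O', the same lookup MARKS->[$indx] performs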

Regular expressions used for substituting data

Line 28 demonstrates yet another common use of regular expressions in PERL. Rather than being used in a match operator (m), the regular expression on line 28 is being used in a substitution operator (s). If the value in the move variable is found in the game variable, it will be substituted with the value of the mark variable.
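
On a simplified one-row board, the substitution looks like this:

my $game = '| 4 | 5 | 6 |';
my ($mark, $move) = ('X', 5);
$game =~ s/$move/$mark/;        # $game is now '| 4 | X | 6 |'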

PERL sigils and data types

The last things of note that are used in the example Tic-Tac-Toe program are the sigils ($ and @) that are placed before the variable names. When creating a variable, the sigil indicates the type of variable being created. It is important to note that a different sigil can be prefixed to the variable name when it is accessed to indicate whether one or many items should be returned from the variable.

There are three built-in data types in PERL: scalars ($), arrays (@) and associative arrays (%). Scalars hold a single data item such as a number, character or string of characters. Arrays are numerically indexed sets of scalars. Associative arrays are arrays that are indexed by strings rather than by numbers.
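
One small example of each type:

my $move  = 5;                         # scalar: a single value
my @marks = ('X', 'O');                # array: numerically indexed, $marks[0] is 'X'
my %wins  = ('X' => 2, 'O' => 1);      # associative array: indexed by strings, $wins{'X'} is 2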

Thursday, 20 February

23:03

Worried about future planet-cleansing superbugs? But distrust AI? Guess you're not interested in these antibiotics [The Register]

Meet halicin, picked by a neural network and whimsically named after the HAL 9000 bot

Although new strains of antibiotics are increasingly difficult to develop, scientists have done just that, with the help of a neural network.…

22:01

Broadcom Bringing Up Linux Support For VK Accelerators [Phoronix]

Broadcom developers have been recently volleying open-source Linux driver patches for enabling their "VK Accelerators" on the platform...

20:04

FCC forced by court to ask the public (again) if they think tearing up net neutrality was a really good idea or not [The Register]

US regulator tries to hide embarrassment behind series of sudden announcements

Comment  The Federal Communications Commission (FCC) is asking the American public to tell it if its decision in 2017 to scrap net neutrality regulations was dumb or not.…

19:11

Google product boss cuffed on suspicion of murder after his Microsoft manager wife goes missing, woman's body found, during Hawaii trip [The Register]

Before he was arrested, Googler appealed to internet, newspaper for help finding his spouse

Updated  Sonam Saxena, a product manager at Google Cloud, was arrested in Hawaii this week on suspicion of second-degree murder.…

17:36

Google exiles 600 apps from Play Store for 'disruptive advertising' amid push to clean up Android souk's image [The Register]

Purge is the latest in a series of similar store scourings

On Thursday Google confirmed it has removed nearly 600 Android apps from the Google Play Store and banned them from its ad services for violating its policies on disruptive advertising and interstitials.…

17:23

RADV Vulkan Driver Adds Option For Zeroing Out Video Memory [Phoronix]

New to Mesa 20.1-devel is a new option for the Radeon Vulkan "RADV" driver to enable zeroing out video memory allocations...

16:20

Apple drops a bomb on long-life HTTPS certificates: Safari to snub new security certs valid for more than 13 months [The Register]

Keep your crypto below 398 days after September 1 and you're all good

Safari will, later this year, no longer accept new HTTPS certificates that expire more than 13 months from their creation date.…

15:56

Stuffing nonsense: Persistent cyberpunks are pummelling banks' public APIs, warns Akamai [The Register]

Security biz clocked 55 million malicious login attempts on a client

Financial services firms' public APIs are becoming the target du jour for internet ne'er-do-wells, reckons Akamai, which also said that one of its customers was firehosed with 55 million malicious login attempts last summer.…

14:49

Oracle plays its Trump card: Blushing Big Red gushes over US govt support in Java API battle... just as Larry Ellison holds Donald fundraiser [The Register]

Unfortunate timing – the Obama admin also supported the database giant

The US solicitor general Noel Francisco on Wednesday filed a friend-of-the-court brief in support of Oracle in its Java API copyright lawsuit against Google, scheduled to be argued before the US Supreme Court next month.…

13:52

RSA Conference loses one more abbreviated tech giant after AT&T disconnects over Wuhan coronavirus fears [The Register]

Alternative headline: Killer bio-nasty linked to former alien vault and cyber-hacker gathering

RSA  Yet another big brand has pulled out of RSA Conference, due to take place next week, amid the ongoing novel coronavirus panic.…

12:59

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 [The Register]

Introducing Project Operations

Microsoft has announced a new application, Dynamics 365 Project Operations, as well as additional AI-driven features for its Dynamics 365 range.…

12:30

Google Cloud embraces GitOps with new Application Manager for Kubernetes [The Register]

Cloud giant aims to attract developers with code-oriented deployment automation

Google's new Application Manager, now in beta, is geared toward simplifying setting up GitOps with Google Kubernetes Engine (GKE) as the target platform.…

11:43

Linux NUMA Patches Aim To Reduce Overhead, Avoid Unnecessary Migrations [Phoronix]

A set of patches that continue to be worked on for the Linux kernel is reconciling NUMA balancing decisions with the load balancer. Ultimately this series is about reducing unnecessary task and page migrations and other NUMA balancing overhead...

11:31

We know what you did last summer: MGM's hotel spinoff lost 10.7m guest records and now they're on hacker forums [The Register]

What happens in Vegas... gets leaked on the internet

Casino and hotel chain MGM Resorts lost almost 10.7 million guest records last summer, including the data of Jack Dorsey and Justin Bieber, which was duly posted to hacker forums.…

11:00

Hey, Brits. Your Google data is leaving the EU before you are: Hoard to be shipped from Ireland to US next month [The Register]

Relax, you won't feel a thing

Google's UK users will see their data shifted to a US-based data controller from the end of next month with the ad giant blaming Brexit for the move.…

10:30

Life in plastic, with a classic: Polymer £20 notes released into wild sporting Turner art [The Register]

Updated cocaine straws will be much harder to forge and hopefully vegan

The Bank of England has started sending out new polymer £20 notes but the old paper ones remain legal tender for now.…

09:49

London's Metropolitan Police flip the switch: Smile, fellow citizens... you're undergoing Live Facial Recognition [The Register]

This is not a test

The Metropolitan Police are using live facial recognition (LFR) in various locations in central London today after spending two years testing the technology.…

09:17

Appy days? Microsoft's Word, Excel and PowerPoint now live under one roof on mobile – but look out for Office 365 popups [The Register]

And that's one hell of a privacy agreement

Microsoft's all-in-one mobile Office app combining Word, Excel and Powerpoint into a single application for iOS and Android is here, but you'll need an Office 365 subscription to use the "premium features."…

08:34

No Huawei gear in vital 5G project to bring virtual-reality Robin Hood to Sherwood Forest [The Register]

Rural trials will not use equipment 'from high risk vendors' says Ministry of Fun

The UK's Department for Digital, Culture, Media and Sport (DCMS, aka the Ministry of Fun) has barred Huawei gear from rural 5G trials.…

08:06

GRU won't believe it: UK and US call out Russia for cyber-attacks on Georgia last year [The Register]

It's APT28 again! Public attribution names and shames state-backed crew

The same Russian state hackers who unleashed NotPetya on the world's computers were behind destructive cyberattacks on Georgia during 2019, the governments of Britain and the US have said – echoing a similar attribution a decade ago.…

07:54

FreeBSD vs. Linux Scaling Up To 128 Threads With The AMD Ryzen Threadripper 3990X [Phoronix]

Last week I looked at the Windows vs. Linux scaling performance on the Threadripper 3990X at varying core/thread counts followed by looking at the Windows 10 performance against eight Linux distributions for this $3990 USD processor running within the System76 Thelio Major workstation. Now the tables have turned for our first look at this 64-core / 128-thread processor running on the BSDs, FreeBSD 12.1 in particular. This article looks at the FreeBSD 12.1 performance and how it scales compared to Ubuntu 20.04 Linux and the Red Hat Enterprise Linux 8 based CentOS Stream.

07:35

Keen to check for 'abnormal' user behaviours? Microsoft talks insider risk, AWS imports and compliance at infosec shindig RSA [The Register]

Before you remove the mote from thy hacker's eye, remove the beam from the eyes of your, er, Teams

RSA  As IBM's crew cancels their hotel rooms, Microsoft's infosec staffers are still set to attend the decades-old RSA conference and have pulled the covers off a raft of security releases and previews for the event today.…

06:43

Yo, Imma let you finish, but for the 6,000 people still using that app on a daily basis ... we have a question: why? [The Register]

Taylor Swift of apps or ultimate ironic hipster shout-out?

In 2014, the world was graced with yet another social network. This one was special. While Facebook and Twitter were grotesquely stodgy beasts, this app stood out with its almost Scandinavian simplicity. It would allow you to message your friends with the word "Yo!" – and that's it.…

06:07

All that Samsung users found on UK website after weird Find my Mobile push notification was... other people's details [The Register]

It's looking rather ominous to us

Following a mysterious "Find my Mobile" push notification this morning, questions are swirling around Samsung after customers found other users' login details being shown to them while trying to change their passwords.…

05:38

Multi-SSO and Cloudflare Access: Adding LinkedIn and GitHub Teams [The Cloudflare Blog]

Cloudflare Access secures internal applications without the hassle, slowness or user headache of a corporate VPN. Access brings the experience we all cherish, of being able to access web sites anywhere, any time from any device, to the sometimes dreary world of corporate applications. Teams can integrate the single sign-on (SSO) option, like Okta or AzureAD, that they’ve chosen to use and in doing so make on-premise or self-managed cloud applications feel like SaaS apps.

However, teams consist of more than just the internal employees that share an identity provider. Organizations work with partners, freelancers, and contractors. Extending access to external users becomes a constant chore for IT and security departments and is a source of security problems.

Cloudflare Access removes that friction by simultaneously integrating with multiple identity providers, including popular services like Gmail or GitHub that do not require corporate subscriptions. External users login with these accounts and still benefit from the same ease-of-use available to internal employees. Meanwhile, administrators avoid the burden in legacy deployments that require onboarding and offboarding new accounts for each project.

We are excited to announce two new integrations that make it even easier for organizations to work securely with third parties. Starting today, customers can now add LinkedIn and GitHub Teams as login methods alongside their corporate SSO.

The challenge of sharing identity

If your team has an application that you need to share with partners or contractors, both parties need to agree on a source of identity.

Some teams opt to solve that challenge by onboarding external users to their own identity provider. When contractors join a project, the IT department receives help desk tickets to create new user accounts in the organization directory. Contractors receive instructions on how to sign-up, they spend time creating passwords and learning the new tool, and then use those credentials to login.

This option gives an organization control of identity, but adds overhead in terms of time and cost. The project owner also needs to pay for new SSO seat licenses, even if those seats are temporary. The IT department must spend time onboarding, helping, and then offboarding those user accounts. And the users themselves need to learn a new system and manage yet another password - this one with permission to your internal resources.

Alternatively, other groups decide to “federate” identity. In this flow, an organization will connect their own directory service to their partner’s equivalent service. External users login with their own credentials, but administrators do the work to merge the two services to trust one another.

While this method avoids introducing new passwords, both organizations need to agree to dedicate time to integrate their identity providers - assuming that those providers can integrate. Businesses then need to configure this setup with each contractor or partner group. This model also requires that external users be part of a larger organization, making it unavailable to single users or freelancers.

Both options must also address scoping. If a contractor joins a project, they probably only need access to a handful of applications - not the entire portfolio of internal tools. Administrators need to invest additional time building rules that limit the scope of user permission.

Additionally, teams need to help guide external users to find the applications they need to do their work. This typically ends up becoming a one-off email that the IT staff has to send to each new user.

Multi-SSO with Cloudflare Access

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Administrators build rules to decide who should be able to reach the tools protected by Access. In turn, when users need to connect to those tools, they are prompted to authenticate with their team’s identity provider. Cloudflare Access checks their login against the list of allowed users and, if permitted, allows the request to proceed.

With Multi-SSO, this model works the same way but extends that login flow to other sign-in options. When users visit a protected application, they are presented with the identity provider options your team configures. They select their SSO, authenticate, and are redirected to the resource if they are allowed to reach it.

Cloudflare Access can also help standardize identity across multiple providers. When users login, from any provider, Cloudflare Access generates a signed JSON Web Token that contains that user’s identity. That token can then be used to authorize the user to the application itself. Cloudflare has open sourced an example of using this token for authorization with our Atlassian SSO plugin.

Whether the identity providers use SAML, OIDC, or another protocol for sending identity to Cloudflare, Cloudflare Access generates standardized and consistent JWTs for each user from any provider. The token can then be used as a common source of identity for applications without additional layers of SSO configuration.
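
For teams that want to validate those tokens themselves, Access delivers the JWT to the origin in the Cf-Access-Jwt-Assertion request header, and the public signing keys can be fetched from the account's certs endpoint; a rough sketch, with yourteam standing in for a real Access auth domain:

$ curl -s https://yourteam.cloudflareaccess.com/cdn-cgi/access/certs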

Onboard contractors seamlessly

With the Multi-SSO feature in Cloudflare Access, teams can onboard contractors in less than a minute without paying for additional identity provider licenses.

Organizations can integrate LinkedIn, GitHub, or Google accounts like Gmail alongside their own corporate identity provider. As new partners join a project, administrators can add single users or groups of users to their Access policy. Contractors and partners can then login with their own accounts while internal employees continue to use the SSO provider already in place.

With the Access App Launch, administrators can also skip sending custom emails or lists of links to new contractors and replace them with a single URL. When external users login with LinkedIn, GitHub, or any other provider, the Access App Launch will display only the applications they can reach. In a single view, users can find and launch the tools that they need.

The Access App Launch automatically generates this view for each user without any additional configuration from administrators. The list of apps also updates as permissions are added or removed.

Integrate mergers and acquisitions without friction

Integrating a new business after a merger or acquisition is a painful slog. McKinsey estimates that reorganizations like these take 41% longer than planned. IT systems are a frequent, and expensive, reason. According to data from Ernst and Young, IT work represents the third largest one-time integration cost after a merger or acquisition - only beat by real estate and payroll severance.

Cloudflare Access can help cut down on that time. Customers can integrate their existing SSO provider and the provider from the new entity simultaneously, even if both organizations share the same identity provider. For example, users from both groups can continue to login with separate identity services without disruption.

IT departments can then start merging applications or deprecating redundant systems from day one without worrying about breaking the login flow for new users.

Zero downtime SSO migrations

If your organization does not need to share applications with external partners, you can still use Multi-SSO to reduce the friction of migrating between identity providers.

Organizations can integrate both the current and the new provider with Cloudflare Access. As groups within the organization move to the new system, they can select that SSO option in the Cloudflare Access prompt when they connect. Users still on the legacy system can continue to use the provider being replaced until the entire team has completed the cutover.

Regardless of which option users select, Cloudflare Access will continue to capture comprehensive and standard audit logs so that administrators do not lose any visibility into authentication events during the migration.

Getting started

Cloudflare Access’ Multi-SSO feature is available today for more than a dozen different identity providers, including the options for LinkedIn and GitHub Teams announced today. You can follow the instructions here to start securing applications with Cloudflare Access. The first five users are free on all plans, and there is no additional cost to add multiple identity providers.

05:30

Researchers trick Tesla into massively breaking the speed limit by sticking a 2-inch piece of electrical tape on a sign [The Register]

You'd hope it would know 85mph speed limits aren't exactly routine

Vid  A single piece of electrical tape stuck to a 35mph (56kph) road sign is enough to trick the autopilot software in Tesla's vehicles into speeding up to 85mph (136kph).…

05:08

GNOME Shell + Mutter See Changes For Tracking Software Rendering, VNC To Toggle Animations [Phoronix]

GNOME Shell and Mutter saw a set of patches land today for GNOME 3.36 that have been around for a few months and deal with the tracking of software rendering and VNC usage where GNOME Shell should in turn disable animations to ease the rendering workload...

04:47

'An issue of survival': Why Mozilla welcomes EU attempts to regulate the internet giants [The Register]

The web is 'optimised for Chrome, not for independent browsers'

Interview  Mozilla's head of EU public policy, Raegan MacDonald, reckons effective regulation to protect privacy and enable fair competition is an "issue of survival" for Mozilla and other independent companies.…

04:46

Raptor Rolls Out New OpenBMC Firmware With Featureful Web GUI For System Management [Phoronix]

While web-based GUIs for system management on server platforms with BMCs are far from new, Raptor Computing Systems now has a fully functioning web-based solution for their OpenBMC-powered libre POWER9 systems, and it remains fully open-source...

04:28

Intel Gen12/Xe Graphics To Support 12-Bit HEVC/VP9 Decode [Phoronix]

We are learning more about the media engine capabilities with the forthcoming Intel "Gen12" (Xe) Tiger Lake graphics...

04:07

How's this for a crossover? Scumbag scammed victims with fake gem mines – then pivoted to fake crypto-mines [The Register]

Not the sort of 'digital transformation' you want to be part of

A bloke has copped to operating a £115m ($149m) scam that managed to encompass physical mining of gems and the virtual mining of cryptocoins.…

03:03

Samsung will be Putin dreaded Kremlin-approved shovelware on its phones, claims Russia [The Register]

Now Ru?

The Russian government, via mouthpiece RIA Novosti, has claimed Korean tech giant Samsung will comply with a controversial Russian law passed in November that forces smartphones and computers to come pre-installed with domestic-made shovelware.…

02:08

Smartwatch owners love their calorie-counting gadgets, but they are verrry expensive [The Register]

Xiaomi the way to the sale rack, would you?

Smartwatch sales have been steadily increasing in recent years, thanks to Apple's efforts, as well as downward pricing pressure from Chinese firms like Xiaomi. And, according to entrail prodders at analyst haus CCS Insight, those who buy them are fairly content.…

00:54

VCs warn: Pumping millions into an AI startup? You mean, pumping millions into Azure, AWS or Google Cloud... [The Register]

And forget SaaS-y upstarts: These machine-learning darlings are more like traditional service outfits

Despite all the hype around artificial intelligence, trendy startups built upon the tech are said to have lower margins than funding-magnet software-as-a-service (SaaS) companies.…

00:19

Linux Will Finally Stop Flickering With AMD Stoney Ridge On 4K Displays [Phoronix]

For those still running the AMD "Stoney Ridge" mobile APUs from 2016 that launched alongside Bristol Ridge with Excavator-based CPU cores and GCN 1.2 graphics, the Linux kernel finally has a fix for flickering issues when driving a 4K display off the APU...

00:03

The great big open-source census: Most-used libraries revealed – plus 10 things developers should be doing to keep their code secure [The Register]

Linux Foundation hears your gripes about naming schemes, legacy code, and more

With modern applications now composed of 80 to 90 per cent Free and Open Source Software (FOSS), the Linux Foundation and Laboratory for Innovation Science at Harvard University (LISH) on Wednesday published their second open-source census to promote better security and code management practices.…

Wednesday, 19 February

22:58

Galileo got it wrong – official: Jupiter actually wet, not super-dry: 'No one would have guessed that water might be so variable across the planet' [The Register]

The 1990s spacecraft, that is

Jupiter contains more water than a previous study suggested, according to recordings from NASA's Juno probe, which were published in Nature Astronomy this month.…

22:01

Rav1e 0.3.1 Is 25~40% Faster At Low Speed Levels For Rust-Based AV1 Encoding [Phoronix]

It was not even two full weeks ago that Rav1e 0.3 was released with speed optimizations and other AV1 encoding enhancements; released on Tuesday was Rav1e 0.3.1 with a change that boosts encode speeds at the lower speed levels...

19:07

Chrome deploys deep-linking tech in latest browser build despite privacy concerns [The Register]

It's not a bug, it's a feature, explains the Chocolate Factory

Google has implemented a browser capability in Chrome called ScrollToTextFragment that enables deep links to web documents, but it has done so despite unresolved privacy concerns and lack of support from other browser makers.…

18:33

Forcing us to get consent before selling browser histories violates our free speech, US ISPs claim [The Register]

That ain't the way life should be, Maine responds

The US state of Maine is violating internet broadband providers' free speech by forcing them to ask for their customers’ permission to sell their browser history, according to a new lawsuit.…

18:00

NVIDIA Posts Firmware Needed For Open-Source GeForce 16 Series Acceleration [Phoronix]

As written about last week, in the works for the Linux 5.7 kernel this spring is open-source NVIDIA "Nouveau" acceleration for the GeForce 16 series. That code is currently sitting in the Nouveau development tree until landing in DRM-Next for Linux 5.7, but NVIDIA has now posted the necessary firmware binaries needed for enabling the hardware acceleration on these Turing GPUs...

17:00

Accelerating Retention Experiments with Partially Observed Data [Yelp Engineering and Product Blog]

Summary Here at Yelp, we generate business wins and a better platform by running A/B tests to measure the revenue impact of different user and business experience interventions. Accurately estimating key revenue indicators, such as the probability a customer retains at least \(n\)-days (\(n\)-day retention) or the expected dollar amount a customer spends over their first \(n\) days (\(n\)-day spend) is core to this experimentation process. Historically at Yelp, \(n\)-day customer or user retention was typically estimated as the proportion of customers/users we observed for more than \(n\) days who retained more than \(n\) days. Similarly, \(n\)-day spend was estimated...

16:41

Oi, Cisco! Who left the 'high privilege' login for Smart Software Manager just sitting out in the open? [The Register]

Critical fix for static credential headlines latest patch rollout

Cisco has released fixes to address 17 vulnerabilities across its networking and unified communications lines.…

16:15

Mesa 20.0 Released With Big Improvements For Intel, AMD Radeon Vulkan/OpenGL [Phoronix]

Mesa 20.0 is now released as the first quarter 2020 update to the Mesa 3D open-source graphics driver stack...

13:16

LLVM Adds MLIR-Vulkan-Runner To Run MLIR On Vulkan-Enabled GPUs [Phoronix]

Added to the LLVM source tree today is mlir-vulkan-runner as a new utility for testing with some interesting possibilities...

11:20

Android 11 Developer Preview Shows Off New 5G APIs, Security Hardening, HDMI Low-Latency [Phoronix]

Google has made their first public developer preview release of the forthcoming Android 11...

11:06

GNOME 3.34.4 Released With Many Bug Fixes [Phoronix]

While GNOME 3.36 will be released in just a few weeks, GNOME 3.34.4 is out today as the latest stable update in the current series...

09:23

Mesa 20.0 Is Imminent With New Intel OpenGL Default, Intel + RADV Vulkan 1.2, OpenGL 4.6 For RadeonSI [Phoronix]

With the release of Mesa 20.0 being imminent, here is a look at all of the new features for this first quarter update to the Mesa 3D stack for open-source OpenGL/Vulkan drivers.

08:23

Oracle Ships Solaris 11.4 SRU18 - Finally Mitigates The SWAPGS Vulnerability [Phoronix]

Oracle today has released Solaris 11.4 SRU18 as the newest version of the long-running Solaris 11.4 series...

07:05

AMD Announces EPYC 7532 + EPYC 7662 As Newest Rome Processors [Phoronix]

AMD has expanded their 7002 series "Rome" family with the availability today of the EPYC 7662 as their latest 64-core / 128-thread offering and the EPYC 7532 as a new 32-core part but with a full 256MB cache to offer more per-core L3 cache than other 32-core processors...

06:30

Saturday Morning Breakfast Cereal - Radical [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
You think all my twitter observations just write themselves?!


Today's News:

04:52

LibreOffice 7 Continues Plumbing Its Vulkan Rendering Support [Phoronix]

Landing last November in the LibreOffice development code was Skia drawing support to replace Cairo and in turn that opens up for Vulkan rendering of this cross-platform, open-source office suite...

02:32

LLVM Clang 11 Adds -std=c++20 Support [Phoronix]

With C++20 now being deemed complete from the recent ISO C++ meeting in Prague, the GNU Compiler Collection went ahead and added the -std=c++20 flag, whereas up until that change this weekend it relied upon the -std=c++2a switch. LLVM's Clang compiler now has similar treatment on its codebase...

02:00

Fedora at the Czech National Library of Technology [Fedora Magazine]

Where do you turn when you have a fleet of public workstations to manage? If you’re the Czech National Library of Technology (NTK), you turn to Fedora. Located in Prague, the NTK is the Czech Republic’s largest science and technology library. As part of its public service mission, the NTK provides 150 workstations for public use.

In 2018, the NTK moved these workstations from Microsoft Windows to Fedora. In the press release announcing this change, Director Martin Svoboda said switching to Fedora will “reduce operating system support costs by about two-thirds.” The choice to use Fedora was easy, according to NTK Linux Engineer Miroslav Brabenec. “Our entire Linux infrastructure runs on RHEL or CentOS. So for desktop systems, Fedora was the obvious choice,” he told Fedora Magazine.

User reception

Changing an operating system is always a little bit risky—it requires user training and outreach. Brabenec said that non-IT staff asked for training on the new system. Once they learned that the same (or compatible) software was available, they were fine.

The Library’s customers were on board right away. The Windows environment was based on thin client terminals, which were slow for intensive tasks like video playback and handling large office suite files. The only end-user education that the NTK needed to create was a basic usage guide and a desktop wallpaper that pointed to important UI elements.

User guidance desktop wallpaper from the National Technology Library.

Although Fedora provides development tools used by the Faculty of Information Technology at the Czech Technical University—and many of the NTK’s workstation users are CTU students—most of the application usage is what you might expect of a general-purpose workstation. Firefox dominates the application usage, followed by the Evince PDF viewer and the LibreOffice suite.

Updates

NTK first deployed the workstations with Fedora 28. They decided to skip Fedora 29 and upgraded to Fedora 30 in early June 2019. The process was simple, according to Brabenec. “We prepared configuration, put it into Ansible. Via AWX I restarted all systems to netboot, image with kickstart, after first boot called provisioning callback on AWX, everything automatically set up via Ansible.”

Initially, they had difficulties applying updates. Now they have a process for installing security updates daily. Each system is rebooted approximately every two weeks to make sure all of the updates get applied.
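
The article does not spell out the mechanism behind those daily security updates, but on Fedora a security-only pass can be scripted with dnf’s --security filter and run from a cron job or systemd timer. The commands below are my own illustration rather than NTK’s actual tooling:

# Apply only packages that have security advisories, non-interactively
$ sudo dnf -y upgrade --security
# Optionally check whether a reboot is needed (needs-restarting plugin)
$ sudo dnf needs-restarting -r || echo "Reboot recommended on $(hostname)"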

Although he isn’t aware of any concrete plans for the future, Brabenec expects the NTK to continue using Fedora for public workstations. “Everyone is happy with it and I think that no one has a good reason to change it.”

Tuesday, 18 February


Monday, 17 February

09:14

Saturday Morning Breakfast Cereal - Astronaut [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The brave men of Apollo had to poop in tiny plastic baggies, back when men were men.


Today's News:

07:09

How to get MongoDB Server on Fedora [Fedora Magazine]

Mongo (from “humongous”) is a high-performance, open source, schema-free, document-oriented database and one of the most popular so-called NoSQL databases. It uses JSON as a document format, and it is designed to be scalable and replicable across multiple server nodes.

Story about license change

It’s been more than a year since upstream MongoDB decided to change the license of the Server code. The previous license was the GNU Affero General Public License v3 (AGPLv3). However, upstream wrote a new license designed to make companies running MongoDB as a service contribute back to the community. The new license is called the Server Side Public License (SSPLv1), and more about this step and its rationale can be found in the MongoDB SSPL FAQ.

Fedora has always included only free (as in “freedom”) software. When SSPL was released, Fedora determined that it is not a free software license in this sense. All versions of MongoDB released before the license change date (October 2018) could potentially have been kept in Fedora, but never updating the packages would eventually bring security issues. Hence the Fedora community decided to remove the MongoDB server entirely, starting with Fedora 30.

What options are left to developers?

Well, alternatives exist; for example, PostgreSQL also supports JSON in recent versions, and it can be used in cases where MongoDB cannot be used any more. With the JSONB type, indexing works very well in PostgreSQL, with performance comparable to MongoDB, and without any compromises on ACID.

The technical reasons that a developer may have chosen MongoDB did not change with the license, so many still want to use it. What is important to realize is that the license change applies only to the MongoDB server. There are other projects that MongoDB upstream develops, like the MongoDB tools, the C and C++ client libraries, and connectors for various dynamic languages, that are used on the client side (in applications that want to communicate with the server over the network). Since the license for those packages remains free (mostly the Apache License), they are staying in the Fedora repositories, so users can use them for application development.

The only change is really the server package itself, which was removed entirely from Fedora repos. Let’s see what a Fedora user can do to get the non-free packages.

How to install MongoDB server from the upstream

When Fedora users want to install a MongoDB server, they need to approach MongoDB upstream directly. However, the upstream does not ship RPM packages for Fedora itself. Instead, the MongoDB server is available either as a source tarball that users need to compile themselves (which requires some developer knowledge), or as compatible packages. Of the compatible options, the best choice at this point is the RHEL-8 RPMs. The following steps describe how to install them and how to start the daemon.

1. Create a repository with upstream RPMs (RHEL-8 builds)


$ sudo tee /etc/yum.repos.d/mongodb.repo > /dev/null <<EOF
[mongodb-upstream]
name=MongoDB Upstream Repository
baseurl=https://repo.mongodb.org/yum/redhat/8Server/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
EOF

2. Install the meta-package, which pulls in the server and tools packages


$ sudo dnf install mongodb-org
<snipped>
Installed:
  mongodb-org-4.2.3-1.el8.x86_64           mongodb-org-mongos-4.2.3-1.el8.x86_64  
  mongodb-org-server-4.2.3-1.el8.x86_64    mongodb-org-shell-4.2.3-1.el8.x86_64
  mongodb-org-tools-4.2.3-1.el8.x86_64          

Complete!

3. Start the MongoDB daemon


$ sudo systemctl start mongod
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
   Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-02-08 12:33:45 EST; 2s ago
     Docs: https://docs.mongodb.org/manual
  Process: 15768 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15769 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15770 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15771 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15773 (mongod)
   Memory: 70.4M
      CPU: 611ms
   CGroup: /system.slice/mongod.service
           └─15773 /usr/bin/mongod -f /etc/mongod.conf
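
By default mongod listens only on 127.0.0.1 (see the bindIp setting in /etc/mongod.conf), which is fine for the local shell test below. If you ever need to reach the server from another machine, you would also have to open the default port in firewalld; this is generic Fedora housekeeping rather than something covered by the upstream packages:

$ sudo firewall-cmd --permanent --add-port=27017/tcp
$ sudo firewall-cmd --reload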

4. Verify that the server runs by connecting to it from the mongo shell


$ mongo
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("20b6e61f-c7cc-4e9b-a25e-5e306d60482f") }
MongoDB server version: 4.2.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    http://docs.mongodb.org/
---

> _

That’s all. As you can see, the RHEL-8 packages are pretty compatible, and that should stay the case for as long as the Fedora packages remain compatible with what’s in RHEL-8. Just be careful that you comply with the SSPLv1 license in your use.

Sunday, 16 February

10:05

Saturday Morning Breakfast Cereal - Mug [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
He tries to steal the phone but it's a first generation android.


Today's News:

Saturday, 15 February

08:54

Saturday Morning Breakfast Cereal - Socializing [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you send hatemail about this, it's proof you were poorly socialized.


Today's News:

Friday, 14 February

09:40

Saturday Morning Breakfast Cereal - Kid Time [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
This is the closest to autobiography I will ever get


Today's News:

01:00

PHP Development on Fedora with Eclipse [Fedora Magazine]

Eclipse is a full-featured free and open source IDE developed by the Eclipse Foundation. It has been around since 2001. You can write anything from C/C++ and Java to PHP, Python, HTML, JavaScript, Kotlin, and more in this IDE.

Installation

The software is available from Fedora’s official repository. To install it, invoke:

sudo dnf install eclipse

This will install the base IDE and Eclipse platform, which enables you to develop Java applications. In order to add PHP development support to the IDE, run this command:

sudo dnf install eclipse-pdt

This will install PHP development tools like PHP project wizard, PHP server configurations, composer support, etc.

Features

This IDE has many features that make PHP development easier. For example, it has a comprehensive project wizard (where you can configure many options for your new projects). It also has built-in features like Composer support, debugging support, a browser, a terminal, and more.

Sample project

Now that the IDE is installed, let’s create a simple PHP project. Go to File → New → Project. From the resulting dialog, select PHP project. Enter a name for your project. There are some other options you might want to change, like the project’s default location, enabling JavaScript, or changing the PHP version. See the following screenshot.

Create A New PHP Project in Eclipse

You can click the Finish button to create the project or press Next to configure other options like adding include and build paths. You don’t need to change those in most cases.

Once the project is created, right click on the project folder and select New → PHP File to add a new PHP file to the project. For this tutorial I named it index.php, the conventionally-recognized default file in every PHP project.

Then add your code to the new file.

Demo PHP code

In the example above, I used CSS, JavaScript, and PHP tags on the same page mainly to show that the IDE is capable of supporting all of them together.
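
The screenshot itself is not reproduced here, so the snippet below is a rough stand-in for what index.php might contain. The markup and the greeting are my own invention, purely to show PHP, CSS, and JavaScript coexisting in one file; if you prefer the built-in terminal, you can write the same contents into the file the wizard created with a quick heredoc (this overwrites whatever is already there):

$ cat > index.php <<'EOF'
<?php $greeting = "Hello from Fedora and Eclipse"; ?>
<!DOCTYPE html>
<html>
  <head>
    <style>h1 { color: #3c6eb4; }</style>
  </head>
  <body>
    <h1><?php echo $greeting; ?></h1>
    <p>Rendered on <?php echo date("Y-m-d H:i:s"); ?></p>
    <script>console.log("JavaScript runs here too");</script>
  </body>
</html>
EOF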

Once your page is ready, you can see the resulting output by moving the file to your web server’s document root or by creating a development PHP server in the project directory.

Thanks to the built-in terminal in Eclipse, we can launch a PHP development server right from within the IDE. Simply click the terminal icon on the toolbar and click OK. In the new terminal, change to the project directory and run the following command:

php -S localhost:8080 -t . index.php 
Terminal output

Now, open a browser and head over to http://localhost:8080. If everything has been done correctly per instructions and your code is error-free, you will see the output of your PHP script in the browser.

PHP output in Fedora

Thursday, 13 February

09:58

Saturday Morning Breakfast Cereal - Bound [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I'm pretty sure the venn diagram for this joke is two non-overlapping circles. But, enjoy!


Today's News:

Wednesday, 12 February

08:50

Saturday Morning Breakfast Cereal - Operations [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
We don't need to learn what it IS as long as we can remember 8 or 9 specific rules for various situations.


Today's News:

Tuesday, 11 February

17:00

Yelp Takes on Grace Hopper 2019! [Yelp Engineering and Product Blog]

Last October we sent a group of Yelpers to the 2019 Grace Hopper Celebration! Here are a few takeaways and reflections from some of our attendees. Who attended? Surashree K., software engineer on Semantic Business Information Clara M., product design lead on Content Anna F., machine learning engineer on Semantic Business Information Nikunja G., software engineer on Infrastructure Security Catlyn K., software engineer on Stream Processing What was your favorite session? Surashree: Honestly, it’s hard to choose, but the one that stuck with me was the talk by Jackie Tsay and Matthew Dierker on Google’s Smart Compose, the Gmail feature...

12:23

The Serverlist: Globally Distributed Websites, $16M Series A, and more [The Cloudflare Blog]


Check out our twelfth edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend.

Sign up below to have The Serverlist sent directly to your mailbox.

09:09

Saturday Morning Breakfast Cereal - Language [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Have you noticed that you can take a sensation, then add any swear word of any kind, and the swear word intelligibly acts as emphasis?


Today's News:

Monday, 10 February

11:18

Vetflare, Cloudflare's Military Veteran Employee Group Launches [The Cloudflare Blog]


“Diversity leads to better outcomes… better decisions, increased innovation, stronger financial returns, and a great place to work for everyone” said Janet Van Huysse, Head of People at Cloudflare during our Q1-2020 kickoff. Veterans, people who have served in the military, are a vital element of a diverse workforce. We come in diverse shapes, sizes, colors, genders, and orientations. We bring diverse skillsets, experiences, and perspectives.  

If you haven’t served in the military and haven’t worked with many veterans, here are some of the things that you can expect from your colleagues or direct reports that are veterans.

Veterans know what it means to SERVE. Indeed, it is a truism that living in service to others is a life well-lived, and that service to others is a foundation of esprit de corps. Though relatively few of us have seen combat, we have all signed a blank check to our nation made payable for any amount, up to and including our lives. This is what it means to become part of something bigger than oneself. This translates to putting our common shared interests ahead of our personal interests even when that means becoming an instrument of a foreign policy we might not agree with.  

Veterans know what it means to be part of a TEAM. The phrase “I’ve got your back” means a lot when it comes from a veteran because they’re referring to the blank check. Just about every veteran you ask will tell you they really miss being part of something bigger than themselves. Companies and organizations in the civilian world that can connect the dots in this way, like Cloudflare’s mission to help build a better internet, unlock the magic that accomplishes the seemingly impossible. We see this at Cloudflare in the incredible pace of product releases AND product improvements. We see this at Cloudflare when people go to the mat for their customers and when people come together to fix a problem.  

Veterans know what it means to focus on a MISSION. When people have bought into the mission, everything and everyone aligns to achieve it. We know that together, as part of a team, with solid leadership, strategy, and tactics we can accomplish the mission. Veterans will help you drop things that are extraneous to the mission and help you focus on the things that will get the mission accomplished. When a veteran on your team asks, “What problem are we trying to solve?” or “Why are we doing this?” you can bet a paycheck that they’re trying to draw a straight line to the goal of the mission.  

Veterans know what it means to COMMIT. Most people view the military as a top-down, hierarchical organization because, well... it is.  But most people don’t realize the level of consensus-driven decision-making that happens prior to an order being given. “Because I told you so” is just not enough of a reason for people to risk their lives or for them to effectively execute their part of a mission. So the military involves their people in mission planning where alternatives are thrashed out, often with great conviction. But when time is up and the mission commander makes their call on how the mission will be carried out, veterans know it’s time to put aside their personal opinions, get onboard, and do whatever it takes to make the plan successful. Jeff Bezos famously calls this “disagree and commit” and veterans are well-practiced in this skill.

Veterans know the importance of MORALE. We’ve seen the unit with everything going for it fail, and we’ve seen the underdog come out on top. We’ve seen troubled units turn themselves around, seemingly overnight. Veterans know how the days drag on endlessly when morale is low, and we know the joy that comes from playing their part in a group that is proud to be doing what they’re doing.

Veterans know how to make DIVERSITY work. We had to because we had no choice in who we worked with in the military. Every year one-third of the people in our units left and new people showed up out of the blue. So veterans get good at onboarding themselves into new organizations and onboarding new people to their teams. Veterans get good at figuring out what people have to offer and where they have gaps so the team can reshape itself to maximize performance.  

If you’re a veteran reading this, know that Cloudflare has a seat at the table for you. This can be your opportunity to transition into the civilian world, transition into tech, or accelerate your career in tech at a rocket-ship that appreciates what you have to offer.  

Supporting veterans is distinct from supporting their country’s foreign policy. Most Americans recognize the mistake we made in not welcoming home veterans of the Vietnam War because we didn’t support the war at-large. Nowadays, “thank you for your service” is a meaningful phrase most veterans hear with some regularity and I’m here to tell you that it means a lot. And it especially means a lot to those veterans who carry the lifelong burden of combat action.

So we Cloudflarians who are also veterans want to say thank you to all of YOU for welcoming us into this company, this culture, and this team that is doing so much more than helping to build a better internet. We are proud and grateful to serve alongside you at CLOUDFLARE.

01:00

Playing Music on your Fedora Terminal with MPD and ncmpcpp [Fedora Magazine]

MPD, as the name implies, is the Music Player Daemon. It can play music but, being a daemon, any piece of software can interface with it and play sounds, including some CLI clients.

One of them is called ncmpcpp, which is an improvement over the pre-existing ncmpc tool. The name change doesn’t have much to do with the language they’re written in: they’re both C++, but ncmpcpp is called that because it’s the NCurses Music Playing Client Plus Plus.

Installing MPD and ncmpcpp

The ncmpcpp client can be installed from the official Fedora repositories with DNF directly with

$ sudo dnf install ncmpcpp

On the other hand, MPD has to be installed from the RPMFusion free repositories, which you can enable, as per the official installation instructions, by running

$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm

and then you can install MPD by running

$ sudo dnf install mpd

Configuring and Starting MPD

The most painless way to set up MPD is to run it as a regular user. The default is to run it as the dedicated mpd user, but that causes all sorts of issues with permissions.

Before we can run it, we need to create a local config file that will allow it to run as a regular user.

To do that, create a subdirectory called mpd in ~/.config:

$ mkdir ~/.config/mpd

copy the default config file into this directory:

$ cp /etc/mpd.conf ~/.config/mpd

and then edit it with a text editor like vim, nano or gedit:

$ nano ~/.config/mpd/mpd.conf

I recommend you read through all of it to check if there’s anything you need to do, but for most setups you can delete everything and just leave the following:

db_file "~/.config/mpd/mpd.db" 
log_file "syslog"

At this point you should be able to just run

$ mpd

with no errors, which will start the MPD daemon in the background.
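
If you would rather not start mpd by hand in every session, the packaged build typically ships a systemd user unit as well. Assuming it is present on your install (worth checking, since the article does not cover it), you can let systemd manage the daemon instead:

$ systemctl --user enable --now mpd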

Using ncmpcpp

Simply run

$ ncmpcpp

and you’ll see a ncurses-powered graphical user interface in your terminal.

Press 4 and you should see your local music library. You can change the selection using the arrow keys and press Enter to play a song.

Doing this multiple times will create a playlist, which allows you to move to the next track using the > button (not the right arrow, the > closing angle bracket character) and go back to the previous track with <. The + and - buttons increase and decrease volume. The Q button quits ncmpcpp but it doesn’t stop the music. You can play and pause with P.

You can see the current playlist by pressing the 1 button (this is the default view). From this view you can press i to look at the information (tags) about the current song. You can change the tags of the currently playing (or paused) song by pressing 6.

Pressing the \ button will add (or remove) an informative panel at the top of the view. In the top left, you should see something that looks like this:

[------]

Pressing the r, z, y, R, x buttons will respectively toggle the repeat, random, single, consume and crossfade playback modes, and will replace one of the characters in that little indicator with the initial of the selected mode.

Pressing the F1 button will display some help text, which contains a list of keybindings, so there’s no need to write a complete list here. So now go on, be geeky, and play all your music from your terminal!

Sunday, 09 February

07:57

Saturday Morning Breakfast Cereal - Mars [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Having done a lot of space science reading lately, let me say I'm deeply ashamed of that unrealistic settlement cylinder. But, if I buried it under regolith it'd just be a hill, and I AM AN ARTIST.


Today's News:

Saturday, 08 February

10:09

Saturday Morning Breakfast Cereal - Love Ironically [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I think I've known actual people who are planning to do this.


Today's News:

01:00

Contribute at the Fedora Test Week for Kernel 5.5 [Fedora Magazine]

The kernel team is working on final integration for kernel 5.5. This version was just recently released and will arrive soon in Fedora, and it includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, February 10, 2020 through Monday, February 17, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you in the Test Week.

Friday, 07 February

08:10

Saturday Morning Breakfast Cereal - Entangled [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Did I mention it's made of all natural fermions?


Today's News:

01:00

Connect Fedora to your Android phone with GSConnect [Fedora Magazine]

Both Apple and Microsoft offer varying levels of integration of their desktop offerings with your mobile devices. Fedora offers a similar if not greater degree of integration with GSConnect. It lets you pair your Android phone with your Fedora desktop and opens up a lot of possibilities. Keep reading to discover more about what it is and how it works.

What is GSConnect?

GSConnect is an implementation of the KDE Connect project tailored for the GNOME desktop. KDE Connect makes it possible for your devices to communicate with each other. However, installing it on Fedora’s default GNOME desktop requires pulling in a large number of KDE dependencies.

GSConnect is a complete implementation of KDE Connect, but in the form of a GNOME shell extension. Once installed, GSConnect lets you do the following and a lot more:

  • Receive phone notifications on your desktop and reply to messages
  • Use your phone as a remote control for your desktop
  • Share files and links between devices
  • Check your phone’s battery level from the desktop
  • Ring your phone to help find it

Setting up the GSConnect extension

Setting up GSConnect requires installing two components: the GSConnect extension on your desktop and the KDE Connect app on your Android device.

First, install the GSConnect extension from the GNOME Shell extensions website: GSConnect. (Fedora Magazine has a handy article on How to install a GNOME Shell extension to help you with this step.)

The KDE Connect app is available on Google’s Play Store. It’s also available on the FOSS Android apps repository, F-Droid.

Once you have installed both these components, you can pair your two devices. Installing the extension makes it show up in your system menu as Mobile Devices. Clicking on it displays a drop down menu, from which you can access Mobile Settings.

GSConnect menu within system menu

Here’s where you can view your paired devices and manage the features offered by GSConnect. Once you are on this screen, launch the app on your Android device.

You can initiate pairing from either device, but here you’ll be connecting to your desktop from the Android device. Simply hit refresh in the app, and as long as both devices are on the same wireless network, your desktop shows up on your Android device. You can now send a pair request to the desktop. Accept the pair request on your desktop to complete the pairing.

Pair request from Android app to desktop

Using GSConnect

Once paired, you’ll need to grant permissions on your Android device to make use of the many features available on GSConnect. Click on the paired device in the list of devices to see all available functions and enable or disable them according to your preferences.

GSConnect device preferences

Remember that you’ll also need to grant corresponding permissions in the Android app to be able to use these functions. Depending upon the features you’ve enabled and the permissions you’ve granted, you can now access your mobile contacts on your desktop, get notified of messages and reply to them, and even sync the desktop and Android device clipboards.

Integration with Files and your web browsers

GSConnect allows you to directly send files to your Android device from your desktop file explorer’s context menu.

On Fedora’s default GNOME desktop, you will need to install the nautilus-python package in order to make your paired devices show up in the context menu. Installing this is as straightforward as running the following command from your preferred terminal:

$ sudo dnf install nautilus-python

Once done, the Send to Mobile Device entry appears in the context menu of the Files app.

Context menu entry to send file to mobile device

Similarly, install the corresponding WebExtension for your browser, be it Firefox or Chrome, to send links to your Android device. You have the option to send the link to launch directly in your browser or to deliver it as SMS.

Running Commands

GSConnect lets you define commands which you can then run on your desktop, from your remote device. This allows you to do things such as take a screenshot of your desktop, or lock and unlock your desktop from your Android device, remotely.

Define commands to be run from the mobile device, on the desktop

To make use of this feature, you can use standard shell commands and the CLI exposed by GSConnect. Documentation on this is provided in the project’s GitHub repository: CLI Scripting.

The KDE UserBase Wiki has a list of example commands. These examples cover controlling the brightness and volume on your desktop, locking the mouse and keyboard, and even changing the desktop theme. Some of the commands are specific to KDE Plasma, and modifications are necessary to make them run on the GNOME desktop.
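
As a starting point, here are a few GNOME-friendly definitions you could register in GSConnect’s command settings. The utilities themselves are standard Fedora tools, but the selection (and the screenshot path) is my own example rather than anything prescribed by the project:

# Lock the current session
loginctl lock-session
# Take a screenshot of the desktop
gnome-screenshot -f /tmp/gsconnect-screenshot.png
# Suspend the machine
systemctl suspend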

Explore and have fun

GSConnect makes it possible to enjoy a great degree of convenience and comfort. Dive into the preferences to see all that you can do and get creative with the commands function. Feel free to share all the possibilities this utility unlocked in your workflow in the comments below.


Photo by Pathum Danthanarayana on Unsplash.

Thursday, 06 February


09:29

Saturday Morning Breakfast Cereal - Adam's Temptation [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Though honestly, where do you grow up that you just trust a talking serpent?


Today's News:

Wednesday, 05 February

17:00

Open-Sourcing Varanus and Rusty Jetpack [Yelp Engineering and Product Blog]

Varanus The monitor lizards are large lizards in the genus Varanus. Some time ago, our Android app got into a loop of sending data, due to some unlikely interactions between several different systems, which briefly overwhelmed our servers before we were able to turn it off. Fortunately, key code was behind an experiment. Otherwise, apps could have continued misbehaving for days, as there is no guarantee users would immediately update the app. It took an unusual combination of circumstances for this to happen, but this kind of problem seems to be a pervasive concern across the industry, and there are...

09:48

Saturday Morning Breakfast Cereal - Crisis [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
We'll also be adding some imitation plants to various interior surfaces.


Today's News:

Tuesday, 04 February

08:57

Saturday Morning Breakfast Cereal - Irrational [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
In case anyone reading the votey comic is about to become a moon hoaxer, the explanation is that there was a goddamn rod holding it up.


Today's News:

Monday, 03 February

12:53

The journey to fast production asset builds with Webpack [Code as Craft]

Etsy has switched from using a RequireJS-based JavaScript build system to using Webpack. This has been a crucial cornerstone in the process of modernizing Etsy’s JavaScript ecosystem. We’ve learned a lot from addressing our multiple use-cases during the migration, and this post is the first of two parts documenting our learnings. Here, we specifically cover the production use-case — how we set up Webpack to build production assets in a way that meets our needs.

We’re proud to say that our Webpack-powered build system, responsible for over 13,200 assets and their source maps, finishes in four minutes on average. This fast build time is the result of countless hours of optimizing. What follows is our journey to achieving such speed, and what we’ve discovered along the way.

Production Expectations

One of the biggest challenges of migrating to Webpack was achieving production parity with our pre-existing JavaScript build system, named Builda. It was built on top of the RequireJS optimizer, a build tool predating Webpack, with extensive customization to speed up builds and support then-nascent standards like JSX. Supporting Builda became more and more untenable, though, with each custom patch we added to support JavaScript’s evolving standards. By early 2018, we consequently decided to switch to Webpack; its community support and tooling offered a sustainable means to keep up with JavaScript modernization. However, we had spent many years optimizing Builda to accommodate our production needs. Assets built by Webpack would need to have 100% functional parity with assets built by Builda in order for us to have confidence around switching.

Our primary expectation for any new build system is that it takes less than five minutes on average to build our production assets. Etsy is one of the earliest and biggest proponents of continuous deployment, where our fast build/deploy pipeline allows us to ship changes up to 30 times a day. Builda was already meeting this expectation, and we would negatively impact our deployment speed/frequency if a new system took longer to build our JavaScript. Build times, however, tend to increase as a codebase grows, productionization becomes more complex, and available computing resources are maxed out.

At Etsy, our frontend consists of over 12,000 modules that eventually get bundled into over 1,200 static assets. Each asset needs to be localized and minified, both of which are time-consuming tasks. Furthermore, our production asset builds were limited to using 32 CPU cores and 64GB of RAM. Etsy had not yet moved to the cloud when we started migrating to Webpack, and these were the specs of the beefiest on-premise hosts available. This meant we couldn’t just add more CPU/RAM to achieve faster builds.

So, to recap:

  • Our frontend consists of over 1,200 assets made up of over 12,000 modules.
  • Each asset needs to be localized and minified as part of productionization.
  • We are limited to 32 CPU cores and 64GB of RAM.
  • Production asset builds need to finish in less than five minutes on average.

We got this.

Localization

From the start, we knew that localization would be a major obstacle to achieving sub-five-minute build times. Localization strings are embedded in our JavaScript assets, and at Etsy we officially support eleven locales. This means we need to produce eleven copies of each asset, where each copy contains localization strings of a specific locale. Suddenly, building over 1,200 assets balloons into building over 1,200 × 11 = 13,200 assets.

General caching solutions help reduce build times, independent of localization’s multiplicative factor. After we solved the essential problems of resolving our module dependencies and loading our custom code with Webpack, we incorporated community solutions like cache-loader and babel-loader’s caching options. These solutions cache intermediary artifacts of the build process, which can be time-consuming to calculate. As a result, asset builds after the initial one finish much faster. Still though, we needed more than caching to build localized assets quickly.

One of the first search results for Webpack localization was the now-deprecated i18n-webpack-plugin. It expects a separate Webpack configuration for each locale, leading to a separate production asset build per locale. Even though Webpack supports multiple configurations via its MultiCompiler mode, the documentation crucially points out that “each configuration is only processed after the previous one has finished processing.” At this stage in our process, we measured that a single production asset build without minification was taking ~3.75 minutes with no change to modules and a hot cache (a no-op build). It would take us ~3.75 × 11 = ~41.25 minutes to process all localized configurations for a no-op build.

We also ruled out using this plugin with a common solution like parallel-webpack to process configurations in parallel. Each parallel production asset build requires additional CPU and RAM, and the sum far exceeded the 32 CPU cores and 64GB of RAM available. Even when we limited the parallelism to stay under our resource limits, we were met with overall build times of ~15 minutes for a no-op build. It was clear we needed to approach localization differently.

Localization inlining

To localize our assets, we took advantage of two characteristics about our localization. First, the way we localize our JavaScript code is through a module abstraction. An engineer defines a module that contains only key-value pairs. The value is the US-English version of the text that needs to be localized, and the key is a succinct description of the text. To use the localized strings, the engineer imports the module in their source code. They then have access to a function that, when passed a string corresponding to one of the keys, returns the localized value of the text.

example of how we include localizations in our JavaScript

For a different locale, the message catalog contains analogous localization strings for the locale. We programmatically handle generating analogous message catalogs with a custom Webpack loader that applies whenever Webpack encounters an import for localizations. If we wanted to build Spanish assets, for example, the loader would look something like this:

example of how we would load Spanish localizations into our assets

Second, once we build the localized code and output localized assets, the only differing lines between copies of the same asset from different locales are the lines with localization strings; the rest are identical. When we build the above example with English and Spanish localizations, the diff of the resulting assets confirms this:

diff of the localized copies of an asset

Even when caching intermediary artifacts, our Webpack configuration would spend over 50% of the overall build time constructing the bundled code of an asset. If we provided separate Webpack configurations for each locale, we would repeat this expensive asset construction process eleven times.

diagram of running Webpack for each locale

We could never finish this amount of work within our build-time constraints, and as we saw before, the resulting localized variants of each asset would be identical except for the few lines with localizations. What if, rather than locking ourselves into loading a specific locale’s localization and repeating an asset build for each locale, we returned a placeholder where the localizations should go?

code to load placeholders in place of localizations

We tried this placeholder loader approach, and as long as it returned syntactically valid JavaScript, Webpack could continue with no issue and generate assets containing these placeholders, which we call “sentinel assets”. Later on in the build process a custom plugin takes each sentinel asset, finds the placeholders, and replaces them with corresponding message catalogs to generate a localized asset.

diagram of our build process with localization inlining

We call this approach “localization inlining”, and it was actually how Builda localized its assets too. Although our production asset builds write these sentinel assets to disk, we do not serve them to users. They are only used to derive the localized assets.

With localization inlining, we were able to generate all of our localized assets from one production asset build. This allowed us to stay within our resource limits; most of Webpack’s CPU and RAM usage is tied to calculating and generating assets from the modules it has loaded. Adding additional files to be written to disk does not increase resource utilization as much as running an additional production asset build does.

Now that a single production asset build was responsible for over 13,200 assets, though, we noticed that simply writing this many assets to disk substantially increased build times. It turns out, Webpack only uses a single thread to write a build’s assets to disk. To address this bottleneck, we included logic to write a new localized asset only if the localizations or the sentinel asset have changed — if neither have changed, then the localized asset hasn’t changed either. This optimization greatly reduced the amount of disk writing after the initial production asset build, allowing subsequent builds with a hot cache to finish up to 1.35 minutes faster. A no-op build without minification consistently finished in ~2.4 minutes. With a comprehensive solution for localization in place, we then focused on adding minification.

Minification

Out of the box, Webpack includes the terser-webpack-plugin for asset minification. Initially, this plugin seemed to perfectly address our needs. It offered the ability to parallelize minification, cache minified results to speed up subsequent builds, and even extract license comments into a separate file.

When we added this plugin to our Webpack configuration, though, our initial asset build suddenly took over 40 minutes and used up to 57GB of RAM at its peak. We expected the initial build to take longer than subsequent builds and that minification would be costly, but this was alarming. Enabling any form of production source maps also dramatically increased the initial build time. Without the terser-webpack-plugin, the initial production asset build with localizations would finish in ~6 minutes. It seemed like the plugin was adding an unknown bottleneck to our builds, and ad hoc monitoring with htop during the initial production asset build seemed to confirm our suspicions:

htop during minification

At some points during the minification phase, we appeared to only use a single CPU core. This was surprising to us because we had enabled parallelization in terser-webpack-plugin’s options. To get a better understanding of what was happening, we tried running strace on the main thread to profile the minification phase:

strace during minification

At the start of minification, the main thread spent a lot of time making memory syscalls (mmap and munmap). Upon closer inspection of terser-webpack-plugin’s source code, we found that the main thread needed to load the contents of every asset to generate parallelizable minification jobs for its worker threads. If source maps were enabled, the main thread also needed to calculate each asset’s corresponding source map. These lines explained the flood of memory syscalls we noticed at the start.

Further into minification, the main thread started making recvmsg and write syscalls to communicate between threads. We corroborated these syscalls when we found that the main thread needed to serialize the contents of each asset (and its source map, if enabled) to send it to a worker thread to be minified. After receiving and deserializing a minification result from a worker thread, the main thread was also solely responsible for caching the result to disk. This explained the stat, open, and other write syscalls we observed, because the Node.js code uses promises to write the contents. The underlying epoll_wait syscalls then poll to check when the writing finishes so that the promise can be resolved.

The main thread can become a bottleneck when it has to perform these tasks for a lot of assets, and considering our production asset build could produce over 13,200 assets, it was no wonder we hit this bottleneck. To minify our assets, we would have to think of a different way.

Post-processing

We opted to minify our production assets outside of Webpack, in what we call “post-processing”. We split our production asset build into two stages, a Webpack stage and a post-processing stage. The former is responsible for generating and writing localized assets to disk, and the latter is responsible for performing additional processing on these assets, like minification:

running Webpack with a post-processing stage
diagram of our build process with localization inlining and post-processing

For minification, we use the same terser library the terser-webpack-plugin uses. We also baked parallelization and caching into the post-processing stage, albeit in a different way than the plugin. Where Webpack’s plugin reads the file contents on the main thread and sends the whole contents to the worker threads, our parallel-processing jobs send just the file path to the workers. A worker is then responsible for reading the file, minifying it, and writing it to disk. This reduces memory usage and facilitates more efficient parallel-processing. To implement caching, the Webpack stage passes along the list of assets written by the current build to tell the post-processing stage which files are new. Sentinel assets are also excluded from post-processing because they aren’t served to users.

Splitting our production asset builds into two stages does have a potential downside: our Webpack configuration is now expected to output un-minified text for assets. Consequently, we need to audit any third-party plugins to ensure they do not transform the outputted assets in a format that breaks post-processing. Nevertheless, post-processing is well worth it because it allows us to achieve the fast build times we expect for production asset builds.
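
The post-processing code itself is not shown here, but the shape of the idea (fanning independent minification jobs out across the freshly written assets) can be sketched with a small shell pipeline. The directory layout, sentinel-asset naming, worker count, and use of the terser CLI below are assumptions made purely for illustration, not the actual implementation:

# Hypothetical sketch: minify every asset written by the Webpack stage in
# parallel, skipping sentinel assets (naming is assumed for the example).
$ find build/assets -name '*.js' ! -name '*.sentinel.js' -print0 \
    | xargs -0 -P32 -I{} npx terser {} --compress --mangle --output {}.min

The real stage also threads caching and source maps through this step, but the parallel fan-out is the part that relieves the single-threaded bottleneck described above.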

Bonus: source maps

We don’t just generate assets in under five minutes on average — we also generate corresponding source maps for all of our assets too. Source maps allow engineers to reconstruct the original source code that went into an asset. They do so by maintaining a mapping of the output lines of a transformation, like minification or Webpack bundling, to the input lines. Maintaining this mapping during the transformation process, though, inherently adds time.

Coincidentally, the same localization characteristics that enable localization inlining also enable faster source map generation. As we saw earlier, the only differences between localized assets are the lines containing localization strings. Subsequently, these lines with localization strings are the only differences between the source maps for these localized assets. For the rest of the lines, the source map for one localized asset is equivalently accurate for another because each line is at the same line number between localized assets.

If we were to generate source maps for each localized asset, we would end up repeating resource-intensive work only to result in nearly identical source maps across locales. Instead, we only generate source maps for the sentinel assets the localized assets are derived from. We then use the sentinel asset’s source map for each localized asset derived from it, and accept that the mapping for the lines with localization strings will be incorrect. This greatly speeds up source map generation because we are able to reuse a single source map that applies to many assets.

For the minification transformation that occurs during post-processing, terser accepts a source map alongside the input to be minified. This input source map allows terser to account for prior transformations when generating source mappings for its minification. As a result, the source map for its minified results still maps back to the original source code before Webpack bundling. In our case, we pass terser the sentinel asset’s source map for each localized asset derived from it. This is only possible because we aren’t using terser-webpack-plugin, which (understandably) doesn’t allow mismatched asset/source map pairings.

diagram of our complete build process with localization inlining, post-processing, and source map optimizations

Through these source map optimizations, we are able to maintain source maps for all assets while adding only ~1.7 minutes to our average build time. Our approach can result in up to a 70% speedup in source map generation compared to the out-of-the-box options offered by Webpack.

Conclusion

Our journey to achieving fast production builds can be summed up into three principles: reduce, reuse, recycle.

  • Reduce
    Reduce the workload on Webpack’s single thread. This goes beyond applying parallelization plugins and implementing better caching. Investigating our builds led us to discover single-threaded bottlenecks like minification, and after implementing our own parallelized post-processing we observed significantly faster build times.
  • Reuse
    The more existing work our production build can reuse, the less it has to do. Thanks to the convenient circumstances of our production setup, we are able to reuse source maps and apply them to more than one asset each. This avoids a significant amount of unnecessary work when generating source maps, a time-intensive process.
  • Recycle
    When we can’t reuse existing work, figuring out how to recycle it is equally valuable. Deriving localized assets from sentinel assets allows us to recycle the expensive work of producing an asset from an entrypoint, further speeding up builds.

While some implementation details may become obsolete as Webpack and the frontend evolve, these principles will continue to guide us towards faster production builds.

02:36

Enable remote collaboration with tmate.io on Fedora [Fedora Magazine]

Being able to collaborate on tasks remotely is an increasing need in today’s world. Contributing to an open source project? Working remotely? tmate is a tmux fork that makes it easy to share a terminal session with others. It can save you hours of lonely debugging or programming.

tmate, being a tmux fork, supports all of tmux’s features and configuration, and tmux and tmate can co-exist on the same system. To learn more about tmux, see the Fedora Magazine article covering it.

Installing tmate on Fedora

tmate is available in the Fedora repository, making it really easy to install.

$ sudo dnf install tmate
$ tmate
Connecting to ssh.tmate.io…
 Note: clear your terminal before sharing readonly access
 web session read only: https://tmate.io/t/ro-F2aK7T
 ssh session read only: ssh ro-F2aK7TJsEj6b4T@l.tmate.io
 web session: https://tmate.io/t/H5rPw
 ssh session: ssh H5rPwR@l.tmate.io

After starting tmate, several ways to share your session are displayed. You can choose between ssh and web access, each available in read-only or read-write mode.

The web client is known to have a few issues and is still a work in progress; for example, the tmux key bindings are not yet supported.

On the host running tmate, you create a new window by hitting “Ctrl+b, c”. The new window will then be visible to anyone connected to your session.

You can easily keep track of how many clients are connected to your session using the tmate control pane. To access it, hit “Ctrl+b, 0” (zero); you will then see something like this.

A mate has joined (109.95.145.251) -- 1 client currently connected
A mate has left (109.95.145.251) -- 0 client currently connected
A mate has joined (109.95.145.251) -- 1 client currently connected

To close a session, simply exit tmate with “Ctrl+c, Ctrl+d”.

Running your own server

By default, tmate uses a remote server hosted at tmate.io. If you prefer, you can run your own server. For convenience, a container image is provided and instructions are available on tmate.io.

It is important to remember that sharing your terminal session in read-write mode gives the connected clients full access to your system. So make sure you trust the people you share your session with, or use the read-only mode.

Wednesday, 29 January

17:00

Modernizing Ads Targeting Machine Learning Pipeline [Yelp Engineering and Product Blog]

Yelp’s mission is to connect users with great local businesses. As part of that mission, we provide local businesses with an ads product to help them better reach out to users. This product strives to showcase the most relevant ads to the user without taking away from their overall search experience on Yelp. In this blog post, we’ll walk through the architecture of how this is made possible by using one of the largest machine learning systems at Yelp: Ads Targeting System. The Ads Targeting System is a machine learning (ML) system designed to serve only the most relevant ads...

09:51

HI FRIENDS! I painted a mural in Mexico City! This was such a fun... [Sarah's Scribbles]

HI FRIENDS! I painted a mural in Mexico City!

This was such a fun and unique experience. The mural is designed with a lot of poses in the hopes that people will try to pose alongside the character. I would love to see your photos next to the mural!

Thank you to Pictoline who brought me over and helped me paint! <3

The mural address is Calle Mérida esquina con Tabasco Colonia Roma Norte, CDMX.

06:00

JAMstack at the Edge: How we built Built with Workers… on Workers [The Cloudflare Blog]

I'm extremely stoked to announce Built with Workers today – it's an awesome resource for exploring what you can build with Cloudflare Workers. As Adam explained in our launch post, showcasing developers building incredible projects with tools like Workers KV or our streaming HTML rewriter is a great way to celebrate users of our platform. It also helps encourage developers to try building their dream app on top of Workers. In this post, I’ll explore some of the architectural and implementation designs we made while building the site.

When we first started planning Built with Workers, we wanted to use the site as an opportunity to build a new greenfield application, showcasing the strength of the Workers platform. The Workers Developer Experience team is cross-functional: while we might spend most of our time improving our docs, or developing features for our command-line interface Wrangler, most of us have spent years developing on the web. The prospect of starting a new application is always fun, but in this instance, it was a prime chance to ask (and answer) the question, "If I could build this site on Workers with whatever tools I want, what would I choose?"

A guiding principle for the Workers platform is ease-of-use. The programming model is simple: it's just JavaScript (or, via WASM, Rust, C, and C++), and you have complete control over the requests coming in and the requests going out from your Workers script. In the same way, while building Built with Workers, it was crucial to find a set of tools that could enable something like this throughout the process of building the entire application. To enable this, we've embraced JAMstack – a software stack comprised of JavaScript, APIs, and markup – with Built with Workers, deploying always up-to-date static builds of the site directly to the edge, using Workers Sites. Our framework of choice, Gatsby.js, provides a set of sane defaults to build a modern web application. To manage content and the layout of the site, we've chosen Sanity.io, a powerful headless CMS that allows us to model the entire website without needing to deploy any databases or spin up any additional infrastructure.

Personally, I'm excited about JAMstack as a methodology for building web applications because of this emphasis on reducing infrastructure: it's incredibly similar to the motivations behind deploying serverless applications using Cloudflare Workers, and as we developed Built with Workers, we discovered a number of these philosophical similarities in JAMstack and Cloudflare Workers – exciting! To help encourage developers to explore building their own JAMstack applications on Workers, I'm also announcing today that we've made the Built with Workers codebase open-source on GitHub – you can check out how the application is developed, built and deployed from start to finish.

In this post, we'll dig into Built with Workers, exploring how it works, the technical decisions we've made, and some of the most fascinating aspects of what it means to build applications on the edge.

A screenshot of the Built with Workers homepage

Exploring the JAMstack

My first encounter with tooling that would ultimately become part of "JAMstack" was in 2013. I noticed the huge proliferation of developers building personal "static" sites – taking blog posts written primarily in Markdown, and pushing them through frameworks like Jekyll to build full websites that could easily be deployed to a number of CDNs and file hosting platforms. These static sites were fast – they are just HTML, CSS, and JavaScript – and easy to update. The average developer spends their days maintaining large and complex software systems, so it was relaxing to just write Markdown, plug in some re-usable HTML and CSS, and deploy your website. The advent of static sites, of course, isn't new – but after years of increasingly complex full-stack technology, the return to simplicity was a promising development for many kinds of websites that didn't need databases, or any dynamic content.

In the last couple years, JAMstack has built upon that resurgence, and represents an approach to building complete, complex applications using the same tooling that has become the first choice for developers building their simple personal sites. JAMstack is comprised of three conceptual pieces – JavaScript, APIs, and Markup – each of which is a crucial aspect of simplifying our web applications and making them easy to write, build, and deploy.

J is for JavaScript

The JAMstack architecture relies heavily on the ubiquity of JavaScript as the language of the web. Many modern web applications use powerful, dynamic front-end frameworks like React and Vue to render user interfaces and process state on the client for users. On the backend, or in Workers' case, on the edge, any dynamic functionality in your JAMstack application should be built on top of JavaScript, often working in the request-response model that full-stack developers are accustomed to.

The Workers platform is perfectly suited to this requirement! As a developer building on Workers, you have total control of incoming requests and outgoing responses, using the JavaScript Service Worker APIs you know and love. We built Workers Sites as an extension of the Workers platform (and Workers KV as a storage mechanism at the edge), making it possible to deploy your site assets using a single command in Wrangler: wrangler publish.

When your Workers Site receives a new request, we'll execute JavaScript at the edge to look up a piece of content from Workers KV, and serve it back to the client at lightning speed. Remarkably, you can deploy JAMstack applications on Workers with no additional configuration besides generating your Workers Site: by design, Workers Sites is built to serve as an exceptional JAMstack deployment platform.

A is for APIs

The advent of static site tooling for personal sites makes sense: your site is just a handful of pages, perhaps a few blog posts and the classic "About" or "Contact" page. When it's compiled to HTML, the footprint is quite small! This small footprint is what makes static sites easy to reason about: they're trivial to host in terms of bandwidth and storage costs, and they rarely change, so they're easily cacheable.

While that works for personal sites, complex applications actually have data requirements! We need to pull user data from our databases, and analytics information from our data warehouses. JAMstack apps tackle this by definitively stating that these data sources should be accessible via HTTPS APIs, consumable by the application as a way to provide dynamic information to clients.

Workers is a fascinating platform with regard to JAMstack APIs. It can serve as a gateway to your data, or as a place to persist and return data itself. I can, for instance, expose an API endpoint via my Workers script without giving clients access to my origin. I can also use tooling like Workers KV to persist data directly on the edge, and when a user requests that data, I can resolve the request by returning JSON directly from my application.
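
To make that concrete, a Worker along these lines could serve JSON straight from Workers KV. This is a minimal sketch rather than code from the Built with Workers repository, and the PROJECTS namespace binding and key are hypothetical.

// A minimal sketch of a Worker exposing a read-only JSON API backed by Workers KV.
// PROJECTS is a hypothetical KV namespace binding configured for the script.
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  if (url.pathname === "/api/projects") {
    // Resolve the data at the edge, without ever touching an origin server
    const projects = await PROJECTS.get("all-projects", "json")
    return new Response(JSON.stringify(projects || []), {
      headers: { "Content-Type": "application/json" },
    })
  }
  return new Response("Not found", { status: 404 })
}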

This flexibility has been an unexpected part of the experience of developing Built with Workers. In a later section of this post, I'll talk about how we developed an integral feature of the site based on the unique strengths of Workers as a way to host static assets and as a dynamic JavaScript execution platform. This has remarkable implications that blur the lines between classic static sites and dynamic applications, and I'm really excited about it.

M is for Markup

A breakthrough moment in my understanding of JAMstack came at the beginning of this year. I was working on a job board for frontend developers, using the static site framework Gatsby.js and Sanity.io, a headless CMS tool that allows developers to model content without maintaining a database or any infrastructure. (As a reminder – this set of tools is identical to what we ultimately used to develop Built with Workers. It's a very good stack!)

SEO is crucial to a job board, and as I began to explore how to drive more traffic to my job board, I landed on the idea of generating a huge amount of search-oriented content automatically from the job data I already had. For instance, if I had job posts with the keywords "React", "Europe", and "Senior" (as in "senior developer"), what if I created pages with titles like "Senior React developer jobs in Europe", or "Remote Angular jobs"? This approach would allow the site to begin ranking for a variety of job positions, locations, and experience levels, and as more jobs were posted on the site, each of these pages would be enriched with more useful information and relevant content, helping it rank higher in search.

"But static sites are... static!", I told myself. Would I need to build an entire dynamic API on top of my static site, just to be able to serve these search-engine optimized pages? This led me to a "eureka" moment with Gatsby – I could define markup (the "M" in JAMstack), and when I'm building my site, I could look at all the available job data I had, cycling through every available keyword combination and inserting it into my markup to generate thousands of these pages. As I later learned, this idea is not necessarily unique to Gatsby – it is possible, for instance, to automate getting data from your API and writing it to data files in earlier static site frameworks like Hugo – but it is a first-class citizen in Gatsby. There are a ton of data sources available via Gatsby plugins, and because they're all exposed via HTTPS, the workflow is standardized inside of the framework.

In Built with Workers, we connect to the Sanity.io CMS instance at build-time: crucially, by the time that the site has been deployed to Workers, the application effectively has no idea what Sanity even is! Our Gatsby application connects to Sanity.io via an HTTPS API, and using GraphQL, we look at all the data that we have in our CMS, and make decisions about what pages to generate and how to render the site's interface, ultimately resulting in a statically-built application that is derived from dynamic data.

This emphasis on the build step in JAMstack is quite different from the classic web application. In the past, a user requested data, a web server looked at what the user was requesting, and then the user waited while the server fetched that data and returned JSON, or interpolated it into templates written in tools like Pug or ERB. With JAMstack, the pages are already built, and the deployed application is just a collection of plain HTML, CSS, and JavaScript.

Why Cloudflare Workers?

Cloudflare's network is a fascinating place to deploy JAMstack applications. Yes, Cloudflare's edge network can act as a CDN for your static assets, like your CSS stylesheets, or your client-side JavaScript code. But with Workers, we now have the ability to run JavaScript side-by-side with our static assets. In most JAMstack applications, the CDN is simply a bucket where your application ends up. Usually, the CDN is the most boring part of the stack! With Cloudflare Workers, we don't just have a CDN: we also have access to an extremely low-latency, fully-featured JavaScript runtime.

The implications of this on the standard JAMstack workflow are, frankly, mind-boggling, and as part of developing Built with Workers, we've been exploring what it means to have this runtime available side-by-side with our statically-built JAMstack application.

To demonstrate this, we’ve implemented a bookmarking feature, which allows users of Built with Workers to bookmark projects. If you look at a project's usage of our streaming HTML rewriter and say "Wow, that's cool!", you can bookmark the project to show your support. This feature, rendered as a button tag, is deceptively simple: it's a single piece of the user interface that makes use of the entirety of the Workers platform to provide user-specific dynamic functionality. We'll explore this in greater detail later in the post – see "Enhancing static sites at the edge".

A modern development and content workflow

In the announcement post for Workers Sites, Rita outlined the motivations behind launching Workers Sites as a modern way to deploy sites:

"Born on the edge, Workers Sites is what we think modern development on the web should look like, natively secure, fast, and massively scalable. Less of your time is spent on configuration, and more of your time is spent on your code, and content itself."

A few months later, I can say definitively that Workers Sites has enabled us to develop Built with Workers and spend almost no time on configuration. Using our GitHub Action for deploying Workers applications with Wrangler, the site has been continuously deploying to a staging environment for the past couple weeks. The simplicity around this continuous deployment workflow has allowed us to focus on the important aspects of the project: development and content.

The static site framework ecosystem is fairly competitive, but as we considered our options for this site, I advocated strongly for Gatsby.js. It's an incredible tool for building JAMstack applications, with a great set of defaults for performant applications. It's common to see Gatsby sites with Lighthouse scores in the upper 90s, and the decision to use React for implementing the UI makes it straightforward to onboard new developers if they're familiar with React.

As I mentioned in a previous section, Gatsby shines at build-time. Gatsby's APIs for creating pages during the build process based on API data are incredibly powerful, allowing developers to concretely define every statically-generated page on their web application, as well as any relevant data that needs to be passed in.

With Gatsby decided upon as our static site framework, we needed to evaluate where our content would live. Built with Workers has two primary data models, used to generate the UI:

  • Projects: websites, applications, and APIs created by developers using Cloudflare Workers. For instance, Built with Workers!
  • Features: features available on the Workers platform used to build applications. For instance, Workers KV, or our streaming HTML rewriter/parser.

Given these requirements, there were a number of potential approaches to take to store this data, and make it accessible. Keeping in line with JAMstack, we know that we probably should expose it via an HTTPS API, but from where? In what format?

As a full-stack developer who's comfortable with databases, it's easy to envision a world where we spin up a PostgreSQL instance, write a REST API, and sprinkle fetch('/api/projects') calls everywhere to get the information we need. This method works, but we can do better! In the same way we built Workers Sites to simplify the deployment process, it was worthwhile to explore the JAMstack ecosystem and see what solutions exist for modeling data without being on the hook for more infrastructure.

Of the different tools in the ecosystem – databases, whether SQL or NoSQL, key-value stores (such as our own, Workers KV), etc. – the growth of "headless CMS" tools has made the largest impact on my development workflow.

On CSS Tricks, Chris Coyier wrote about the rise of headless CMS tools back in March 2016, and summarizes their function well:

[Headless CMSes are] very related to The Big Conversation™ on the web the last many years. How are we going to handle bringing Our Stuff™ all these different devices/screens/inputs.
Responsive design says "let's let our design and media accommodate as much variation in screens as possible."
Progressive enhancement says "let's make the functionality of this site work no matter what."
Designing for accessibility says "let's ensure everyone can use this regardless of their capabilities as a person."
A headless CMS says "let's not tie our data to any one way of doing things."

Using our headless CMS, Sanity.io, we can get every project inside our dataset, and call Gatsby's createPage function to create a new page for each project, using a pre-defined project template file:

// gatsby-node.js

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;

  const result = await graphql(`
    {
      allSanityProject {
        edges {
          node {
            slug
          }
        }
      }
    }
  `);

  if (result.errors) {
    throw result.errors;
  }

  const {
    data: { allSanityProject }
  } = result;

  const projects = allSanityProject.edges.map(({ node }) => node);
  projects.forEach((node, _index) => {
    const path = `/built-with/projects/${node.slug}`;

    createPage({
      path,
      component: require.resolve("./src/templates/project.js"),
      context: { slug: node.slug }
    });
  });
};

Using Sanity to drive the content for Built with Workers has been a huge win for our team. We're no longer constrained by code deploys to make changes to content on the site – we don't need to make a pull request to create a new project, and edits to a project's name or description aren't blocked on someone with the ability to deploy the site. Instead, we can empower members of our team to log in directly to the CMS and make changes, and be confident that once the corresponding deploy has completed (see "The CDN is the deployment platform" below), their changes will be live on the site.

Dynamic JAMstack layouts

As our team got up and running with Sanity.io, we found that the flexibility of a headless CMS platform was useful not just for creating our original data requirements – projects and features – but in rethinking and innovating on how we actually format the application itself.

In keeping with our objective of empowering non-technical folks to make changes to the site without deploying any code, we've also taken the entire homepage of Built with Workers and defined it as an instance of the "layout" data model in Sanity.io. By doing this, we can define corresponding "collections", which are sets of projects. When a layout has many collections defined inside of the CMS, we can rapidly re-order, re-arrange, and experiment with new collections on the homepage, seeing the updated version of the site reflected immediately, and live on the production site within only a few minutes, once our continuous deployment process has finished.
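
For readers unfamiliar with Sanity, a "layout" document type of this shape could be declared in the Studio's schema roughly as follows. The field and type names here are assumptions made for illustration, not the real Built with Workers schema.

// A hedged sketch of a Sanity schema for a homepage "layout" composed of ordered
// "collection" references; the names are illustrative, not the actual site's schema.
export default {
  name: "layout",
  title: "Homepage layout",
  type: "document",
  fields: [
    {
      name: "collections",
      title: "Collections",
      type: "array",
      // Each entry points at a "collection" document (a set of projects), so
      // editors can add, remove, and re-order homepage sections in the Studio.
      of: [{ type: "reference", to: [{ type: "collection" }] }],
    },
  ],
};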

Updating the layout of Built with Workers live from Sanity's studio

With this work implemented, it's easy to envision a world where our React code is purely concerned with rendering each individual aspect of the application's interface – for instance, the project title component, or the "card" for an individual project – and the CMS drives the entire layout of the site. In the future, I'd like to continue exploring this idea in other pages in Built with Workers, including the project pages and any other future content we put on the site.

Enhancing static sites at the edge

Much of what we've discussed so far can be thought of as features and workflows that have great DX (developer experience), but not specific to Workers. Gatsby and Sanity.io are great, and although Workers Sites is a great platform for deploying JAMstack applications due to the Workers platform's low-latency and performance characteristics, you could deploy the site to a number of different providers with no real differentiating features.

As we began building Built with Workers as a JAMstack application on top of Workers, we also wanted to explore how the platform could allow developers to combine the simplicity of static site deployments with the dynamism of having a JavaScript runtime immediately available.

In particular, our recently-released streaming HTML rewriter seems like a perfect fit for "enhancing" our static sites. Our application is being served by Workers Sites, which itself is a Workers template that can be customized. By passing each HTML page through the HTML rewriter on its way to the client, we had an opportunity to customize the content without any negative performance implications.

As I mentioned previously, we landed on a first exploration of this platform advantage via the "bookmark" button. Users of Built with Workers can "bookmark" a project – this action sends a request back up to the Workers application, storing the bookmark data as JSON in Workers KV.

// User-specific data stored in Workers KV, representing
// per-project bookmark information

{
  "bytesized_scraper_bookmarked": false,
  "web_scraper_bookmarked": true
}

When a user returns to Built with Workers, we can make a request to Workers KV, looking for corresponding data for that user and the project they're currently viewing. If that data exists, we can embed the "edge state" directly into the HTML using the streaming HTML rewriter.

// workers-site/index.js

import { getAssetFromKV } from "@cloudflare/kv-asset-handler"

addEventListener("fetch", event => { 
  event.respondWith(handleEvent(event)) 
})

class EdgeStateEmbed {
  constructor(state) {
    this._state = state
  }
  
  element(element) {
    const edgeStateElement = `
      <script id='edge_state' type='application/json'>
        ${JSON.stringify(this._state)}
      </script>
    `
    element.prepend(edgeStateElement, { html: true })
  }
}

const hydrateEdgeState = async ({ state, response }) => {
  const rewriter = new HTMLRewriter().on(
    "body",
    new EdgeStateEmbed(await state)
  )
  return rewriter.transform(await response)
}

async function handleEvent(event) {
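  // `options` (passed to getAssetFromKV) and `transformBookmark` are helpers
  // defined elsewhere in the site's Worker and elided from this excerpt.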
  return hydrateEdgeState({
    response: getAssetFromKV(event, options),
    // Get associated state for a request, based on the user and URL
    state: transformBookmark(event.request),
  })
}

When the React application is rendered on the client, it can then check for that embedded edge state, which influences how the "bookmark" icon is rendered: either as "bookmarked" or as "not bookmarked". To support this, we've leaned on React's useContext, which allows any component inside of the application component tree to pull out the edge state and use it inside of the component:

// edge_state.js

import React from "react"
import { useSSR } from "../utils"

const parseDocumentState = () => {
  const edgeStateElement = document.querySelector("#edge_state")
  return edgeStateElement ? JSON.parse(edgeStateElement.innerText) : {}
}

export const EdgeStateContext = React.createContext([{}, () => {}])
export const EdgeStateProvider = ({ children }) => {
  const { isBrowser } = useSSR()
  if (!isBrowser) {
    return <>{children}</>
  }
  
  const edgeState = parseDocumentState()
  const [state, setState] = React.useState(edgeState)
  const updateState = (newState, options = { immutable: true }) => options.immutable
    ? setState(Object.assign({}, state, newState))
    : setState(newState)
  
  return (
    <EdgeStateContext.Provider value={[state, updateState]}>
      {children}
    </EdgeStateContext.Provider>
  )
}

// Inside of a React component
const Bookmark = ({ bookmarked, project, setBookmarked, setLoaded }) => {
const [state, setState] = React.useContext(EdgeStateContext)
// `bookmarked` value is a simplification of actual code
return <BookmarkButton bookmarked={state[project.id]} />
}

The combination of a straightforward JAMstack deployment platform with dynamic key-value object storage and a streaming HTML rewriter is really, really cool. This is an initial exploration into what I consider to be a platform-defining feature, and if you're interested in this stuff and want to continue to explore how this will influence how we write web applications, get in touch with me on Twitter!

The CDN is the deployment platform

While it doesn't appear in the acronym, an unsung hero of the JAMstack architecture is deployment. In my local terminal, when I run gatsby build inside of the Built with Workers project, the result is a folder of static HTML, CSS, and JavaScript. It should be easy to deploy!

The recent release of GitHub Actions has proven to be a great companion to building JAMstack applications with Cloudflare Workers – we've open-sourced our own wrangler-action, which allows developers to build their Workers applications and deploy directly from GitHub.

The standard workflows in the continuous deployment world – deploy every hour, deploy on new changes to the master branch, etc – are possible and already being used by many developers who make use of our wrangler-action workflow in their projects. Particular to JAMstack and to headless CMS tools is the idea of "build-on-change": namely, when someone publishes a change in Sanity.io, we want to do a new deploy of the site to immediately reflect our new content in production.

The versatility of Workers as a place to deploy JavaScript code comes to the rescue, again! By telling Sanity.io to make a GET request to a deployed Workers webhook, we can trigger a repository_dispatch event on GitHub Actions for our repository, allowing new deploys to happen immediately after a change is detected in the CMS:

const headers = {
  Accept: 'application/vnd.github.everest-preview+json',
  Authorization: 'Bearer $token',
}

const body = JSON.stringify({ event_type: 'repository_dispatch' })

const url = `https://api.github.com/repos/cloudflare/built-with-workers/dispatches`

const handleRequest = async event => {
  await fetch(url, { method: 'POST', headers, body })
  return new Response('OK')
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
})

In doing this, we've made it possible to completely abstract away every deployment task around the Built with Workers project. Not only does the site deploy on a schedule, and on new commits to master, but it can also do additional deploys as the content changes, so that the site is always reflective of the current content in our CMS.

The GitHub Actions deployment workflow for Built with Workers

Conclusion

We're super excited about Built with Workers, not only because it will serve as a great place to showcase the incredible things people are building with the Cloudflare Workers platform, but because it also has allowed us to explore what the future of web development may look like. I've been advocating for what I've seen referred to as "full-stack serverless" development throughout 2019, and I couldn't be happier to start 2020 with launching a project like Built with Workers. The full-stack serverless stack feels like the future, and it's actually fun to build with on a daily basis!

If you're building something awesome with Cloudflare Workers, we're looking for submissions to the site! Get in touch with us via this form – we're excited to speak with you!

Finally, if topics like JAMstack on Cloudflare Workers, "edge state" and dynamic static site hydration, or continuous deployment interest you, the Built with Workers repository is open-source! Check it out, and if you're inspired to build something cool with Workers after checking out the code, make sure to let us know!

Announcing Built with Workers [The Cloudflare Blog]

Ever since its initial release, Cloudflare Workers has given JavaScript developers a platform to enable building high-performance applications with automatic scaling.

As with any new technology, we know it can be a bit intimidating to get started. For one thing, running code on the edge is a paradigm shift—forcing us to rethink classic web architecture problems, or removing them altogether. For another, since you can build just about anything, it can be challenging to figure out what to build first.


Today we’re launching Built with Workers, a new site designed to help get those creative juices flowing and unblock you, by answering that simple but important question: What can I build with Cloudflare Workers?

Some time in 1999, at age 11, I received my first graphing calculator. It was a TI-82 that my older sister no longer needed. It was on this very calculator that I learned to write code. Looking back, I’m not sure how exactly I had the patience or sanity to figure it all out.

It was a mess. Among the many difficulties were that I had to type the code out on the calculator’s non-QWERTY keyboard, the language I was writing in didn’t have functions, and oh yeah, the text editor would frequently bug out and I’d inexplicably lose half or all of my code.

But what was perhaps more challenging than all of that, was that I had absolutely zero code examples to draw inspiration from.

I remember when I stumbled on a design pattern to handle input. It was quite the eureka moment. With only labels and gotos, I would have to check if each key was pressed and then loop back around to do it all over again. Little did I know I’d be programming games just about the same way today using requestAnimationFrame.

Though I was able to make a few simple programs hunting around like this, I quickly hit my limits and stopped writing them.

A couple of years later, a friend of mine, with his fancy-pantsed TI-83 Silver Edition calculator with 4× the RAM of mine—I was a bit jealous—showed me a program that came with his fancy new calculator.

It was called Phoenix. If you ever played any graphing calculator games, it was probably this one. It was fast and action packed, flying a spaceship around shooting enemies. It was beautiful.

Seeing this game totally changed my perspective on the platform. I went on to create a couple of games involving similar mechanics and a similar animation style, all because I’d seen this game on a friend’s calculator.

It opened up my eyes to what was possible, gave me the confidence to try things I previously thought were impossible, and brought out the detective inside me to want to figure out how they were able to build each piece of functionality.

A few other friends of mine started writing programs too. We would trade our programs in the back of math class, using a cable to attach the two calculators together.

Importantly, when you’d receive a program from another friend’s calculator, you could run it and you could view and manipulate the source code. This allowed us to collaborate on games, by passing them back and forth.

Our apps and games became more complex and interesting. Non-coder friends of ours were becoming interested in our projects too, and we started sharing games at lunch. We would get great feedback from them, leading us to fix bugs, build more stylish graphics and intro sequences, and streamline the player experience.

I’m sure our teachers loved us...


A few years later, I got the Internet.

What made the Internet so exciting to me was that by design, the source code of any website was just sitting right there, one keyboard command away. Just like the calculators.

So I did what many of us did back then—and still do today: if I saw something cool, I stole it. Oh nice hover animation: I’ll take that thank you very much. Cool button design: yup, that’s mine now.

Back then we called them websites not apps, and ourselves web designers not frontend engineers, but in all of the time since then, the web platform hasn’t really changed.

Although the web today is more complex, often with many more layers of abstraction, it’s still the case that if you can see something in your browser, most likely you can quickly access, study, copy, manipulate, borrow and steal the source code that created it in just a few seconds.

This is why I’m personally so excited to bring Built with Workers to the community, and to learn from the projects on it myself.

Every project page has a section in which the creators get to describe how their project uses Cloudflare Workers. Many projects use Workers KV, our distributed key-value store, and Workers Sites, our edge-based static site hosting, and some projects are entirely written, built, tested, and deployed with Cloudflare Workers.

Even better than that, many projects on Built with Workers are open-source, featuring a direct link to the GitHub repo, allowing you to quickly get at the source. The Built with Workers site is itself one of these open-source projects.

We hope that the projects on Built with Workers will help inspire you to build your next project, by seeing just what’s possible.

Visit Built with Workers

It’s been thrilling to see the incredible projects people are building. If you’ve got a project you’d like to share, please fill out this form.

05:10

4 cool new projects to try in COPR for January 2020 [Fedora Magazine]

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.

Contrast

Contrast is a small app for checking the contrast between two colors and determining whether it meets the requirements specified in the WCAG. The colors can be selected either by their RGB hex codes or with a color picker tool. In addition to showing the contrast ratio, Contrast displays short sample text on a background in the selected colors to demonstrate the comparison.

Installation instructions

The repo currently provides contrast for Fedora 31 and Rawhide. To install Contrast, use these commands:

sudo dnf copr enable atim/contrast
sudo dnf install contrast

Pamixer

Pamixer is a command-line tool for adjusting and monitoring volume levels of sound devices using PulseAudio. You can display the current volume of a device and either set it directly or increase/decrease it, or (un)mute it. Pamixer can list all sources and sinks.

Installation instructions

The repo currently provides Pamixer for Fedora 31 and Rawhide. To install Pamixer, use these commands:

sudo dnf copr enable opuk/pamixer
sudo dnf install pamixer

PhotoFlare

PhotoFlare is an image editor. It has a simple and well-arranged user interface, where most of the features are available in the toolbars. PhotoFlare provides features such as various color adjustments, image transformations, filters, brushes and automatic cropping, although it doesn’t support working with layers. Also, PhotoFlare can edit pictures in batches, applying the same filters and transformations on all pictures and storing the results in a specified directory.

Installation instructions

The repo currently provides PhotoFlare for Fedora 31. To install Photoflare, use these commands:

sudo dnf copr enable adriend/photoflare
sudo dnf install photoflare

Tdiff

Tdiff is a command-line tool for comparing two file trees. In addition to showing that some files or directories exist in one tree only, tdiff shows differences in file sizes, types and contents, owner user and group ids, permissions, modification time and more.

Installation instructions

The repo currently provides tdiff for Fedora 29-31 and Rawhide, EPEL 6-8 and other distributions. To install tdiff, use these commands:

sudo dnf copr enable fif/tdiff  
sudo dnf install tdiff

Tuesday, 28 January

01:00

Empowering Your Privacy [The Cloudflare Blog]

Happy Data Privacy Day! At Cloudflare, our mission is to help build a better Internet, and we believe data privacy is core to that mission. But we know words are cheap — even data brokers who sell your personal information will tell you that “privacy is important” to them. So we wanted to take the opportunity on this Data Privacy Day to show you how our commitment to privacy crosses all levels of the work we do at Cloudflare to help make the Internet more private and secure — and therefore better — for everyone.

Privacy on the Internet means different things to different people. Maybe privacy means you get to control your personal data — who can collect it and how it can be used. Or that you have the right to access and delete your personal information. Or maybe it means your online life is protected from government surveillance or from ad trackers and targeted advertising. Maybe you think you should be able to be completely anonymous online. At Cloudflare, we think all these flavors of privacy are equally important, and as we describe in more detail below, we’ve taken steps to address each of these privacy priorities.

Governments don’t necessarily take the same view on what privacy should mean either. Europe has its General Data Protection Regulation (GDPR), under which people have the right to control how their information is used, and the protection of data is a fundamental right under the EU Charter of Fundamental Rights. The United States takes a consumer-centric approach focusing on deceptive use of information, the sale of information, and privacy from unwarranted government surveillance. Brazil’s privacy law is similar to that of Europe’s, and Canada, New Zealand, Japan, Australia, China, and Singapore (to name a few) have some variation on the theme of a national, comprehensive privacy law.

Rather than viewing privacy of personal data as an ocean of data to be regulated through the lens of any particular government, we think privacy merits a different approach. To begin with, we don’t think there should be an ocean of personal data. We believe in empowering individuals and entities of all sizes with technological tools to reduce the amount of personal data that gets funneled into the data ocean — regardless of whether you live in a country with laws protecting the privacy of your personal data. If we can build tools to help you share less personal data online, then that’s a win for privacy no matter your privacy priorities or country of residence.

Technologies that Enable the Privacy of Personal Data

We’ve said it before — the Internet was not built with privacy and security in mind. But as the Internet has become more essential to daily life and more central to even the most critical corporate and government systems, the world has needed better tools to provide privacy and security for these online functions. When we talk about building a better Internet, for us that means (re)building the Internet with privacy baked in. Since Cloudflare launched in 2010, we’ve released a number of state-of-the-art, privacy-enhancing technologies that can help individuals, businesses, and governments alike:

  • Universal SSL: In 2014, there were 2 million websites that supported encrypted connections. In September of that year we introduced universal SSL (now called Transport Layer Security) for all of our customers, paying and free, and overnight we were able to make SSL easily available at scale to the millions of websites that use Cloudflare. Supporting SSL means that we support encrypting the content of web pages, which had previously been sent as plain text over the Internet. It’s like sending your private, personal information in a locked box instead of on a postcard.
  • Privacy Pass: Cloudflare supports Privacy Pass, which lets users prove their identity across multiple sites anonymously without enabling tracking. When people use anonymity services or shared IPs, it makes it more difficult for website protection services like Cloudflare to identify their requests as coming from legitimate users and not bots. To help reduce the friction for these users — which include some of the most vulnerable users online — Privacy Pass provides them with a way to prove they are legitimate across multiple sites on the Cloudflare network. This is done without revealing their identity, and without exposing Cloudflare customers to additional threats from malicious bots.
  • ESNI: We announced beta support for encrypted Server Name Indication (ESNI) in 2018. Server Name Indication (SNI) was created to allow multiple websites to exist on the same IP address (something that became necessary with the shortage of IPv4 addresses), but it can reveal which websites users are visiting. As described here, ESNI encrypts the SNI, fixing what has been a glaring privacy hole.
  • 1.1.1.1 Public DNS Resolver: In 2018, we announced our public privacy-focused resolver, the 1.1.1.1 Public DNS Resolver (which also turned out to be the world’s fastest public DNS resolver). It was our first consumer product, it’s free, and we built it because we believe that consumers should have the ability to browse the Internet without providers in the middle monitoring user activity. So our public DNS resolver service will never store 1.1.1.1 public DNS resolver users’ IP addresses (referred to as the source IP address) in non-volatile storage, and we anonymize the source IP addresses of 1.1.1.1 public DNS resolver users before logging any data. This way, we have no information about what website a specific user has looked up using the 1.1.1.1 Public DNS Resolver service. We can’t tell who is visiting any given website, and we don’t want to know.
  • DNS over HTTPS (DoH): Using the 1.1.1.1 Public DNS Resolver means that your ISP won’t get all of your browsing data from acting as your DNS resolver, but it can still see your DNS requests in transit unless you encrypt that channel. For those reasons, we added support for DoH. DNS requests can contain some alarmingly personal data, such as your location, the domains and subdomains you have visited, the time of day requests were submitted, and how long you stayed on certain sites. Encrypting those requests ensures that only the user and the resolver get that information, and that no one involved in the transit in between sees it; a small sketch of such an encrypted query follows this list. In addition to DoH, we’ve partnered with Mozilla to support private web browsing in Firefox. We have also employed query minimization to ensure that those who don't need to access the full URL you are requesting simply don’t.
  • 1.1.1.1 Mobile Application with WARP: People are accessing the Internet from their mobile devices more and more, so in 2019 we launched our 1.1.1.1 Mobile Application with WARP. You can enable our mobile application in DNS-only mode to ensure that all of your mobile device's DNS queries are sent to our 1.1.1.1 Public DNS Resolver using either DNS over HTTPS or DNS over TLS. You can also enable WARP in our mobile application, which includes everything from our DNS-only mode and will also route traffic from your device through the Cloudflare network via encrypted tunnels. This means that even if you are accessing websites or mobile applications that are not using HTTPS, the content transmitted to and from your device will be encrypted if you have WARP enabled and will not be sent as plain text over the Internet.  
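
As a rough illustration of what such an encrypted lookup can look like in practice (this is not code from the post), Cloudflare's resolver also exposes a JSON API for DoH at cloudflare-dns.com, which can be queried from any JavaScript runtime that provides fetch, such as a browser, a Worker, or Node 18+:

// An illustrative DoH lookup against the 1.1.1.1 resolver's JSON API.
// The query travels over HTTPS, so intermediaries only see encrypted traffic.
async function resolveOverDoH(name) {
  const response = await fetch(
    `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
    { headers: { accept: "application/dns-json" } }
  );
  const result = await response.json();
  return result.Answer; // the resolved records for the name
}

resolveOverDoH("example.com").then(records => console.log(records));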

How We Do Privacy at Cloudflare

The privacy-enhancing technologies we build are public examples of how we put our money where our mouth is when it comes to privacy. We also want to tell you about the ways — some public, some not — we infuse privacy principles at all levels at Cloudflare.

  • Employee Education and Mindset: An understanding of privacy is core to a Cloudflare employee’s experience right from the start. Employees learn about the role privacy and security play in helping to build a better Internet in their first week at Cloudflare. During the comprehensive employee orientation, we stress the role each employee plays in keeping the company and our customers secure. All employees are required to take annual data protection training, which introduces employees to the fundamentals of the Fair Information Practices (FIPs), GDPR and other applicable laws, and we do targeted training for individual teams, depending on their engagement with personal data, throughout the year.
  • Privacy in Product Development: We have built the FIPs and GDPR requirements into product development. Cloudflare employees take privacy-by-design seriously. We develop products and processes with the principles of data minimization, purpose limitation, and data security always front of mind. We have a product development lifecycle that includes performing privacy impact assessments when we may process personal data. We retain personal data we process for as short a time as necessary to provide our services to our customers. We do not cross-track individual Internet users across sites. We don’t sell personal information. We don’t monetize DNS requests. We detect, deter, and deflect bad actors — we’re not in the business of looking at what any one person (or more specifically, browser) is doing when they browse the Internet. That’s not what we’re about.
  • Internal Compliance with Privacy Regulations: Even before Europe’s watershed GDPR went into effect in 2018 and the California Consumer Privacy Act (CCPA) took effect earlier this month, we were focusing on how to implement the privacy principles embodied in regulations globally. A key part of this has been to minimize our collection of personal data and to only use personal data for the purpose for which it was collected. We view the GDPR and CCPA as a codification of many of the steps we were already taking: only collect the personal data you need to provide the service you’re offering; don’t sell personal information; give people the ability to access, correct, or delete their personal information; and give our customers control over the information that, for example, is cached on our content delivery network (CDN), stored in Workers Key Value Store, or captured by our web application firewall (WAF).
  • Security as a Means to Enhance Privacy: We’re a security company, so naturally we view security as a critical element of ensuring data privacy. In addition to the extensive internal security mechanisms we have in place to protect our customers’ data, we also have become certified under industry standards to demonstrate our commitment to data security. We are ISO 27001 and AICPA SOC 2 Type II certified. Cloudflare's SOC 2 Type II report covers security, confidentiality, and availability controls to protect customer data. We also maintain a SOC 3 report which is the public report of Security, Confidentiality, and Availability controls. In addition to this, we comply with our obligations under the EU Directive on Security of Network and Information Systems (NIS).
  • Privacy-focused Response to Government and Third-Party Requests for Information: Our respect for our customers' privacy applies with equal force to commercial requests and to government or law enforcement requests. Any law enforcement requests that we receive must strictly adhere to the due process of law and be subject to judicial oversight. We believe that U.S. law enforcement requests for the personal data of a non-U.S. person that conflict with the privacy laws of that person’s country of residence (such as the EU GDPR) should be legally challenged. Consistent with both the U.S. CLOUD Act and the proceedings in the Microsoft Ireland case,  providers like Cloudflare may ask U.S. courts to quash requests from U.S. law enforcement based on such a conflict. In addition, it is our policy to notify our customers of a subpoena or other legal process requesting their customer or billing information before disclosure of that information, whether the legal process comes from the government or private parties involved in civil litigation, unless legally prohibited. We also publicly report on the types of requests we receive, as well as our responses, in our semi-annual  Transparency Report. Finally, we publicly list certain types of actions that Cloudflare has never taken in response to government requests, and we commit that if Cloudflare were asked to do any of the things on this list, we would exhaust all legal remedies in order to protect our customers from what we believe are illegal or unconstitutional requests.
  • Bringing Privacy and Security to Vulnerable Entities (Project Galileo): Since 2014, we have been providing a wide range of security products to important, yet vulnerable, voices on the internet with Project Galileo. Privacy is essential to the more than 900 organizations receiving free services under the Project, as many face threats from powerful adversaries. These organizations range from humanitarian groups and non-profit organizations, to journalism and media sites that are repeatedly flooded with malicious attacks in an attempt to knock them offline.
  • Spreading the Message on What We Think Privacy Should Look Like: It isn’t enough to build tools with privacy in mind; we also feel a responsibility to share best practices we have learned and work with policymakers to help them understand the implications of regulation on complex technologies. For example, Cloudflare has actively supported efforts to develop a framework for US Federal privacy standards, urging policymakers to adopt technology-neutral approaches that allow standards to change and improve as technology does. In Europe, we are engaged in the ongoing discussions on the draft ePrivacy Regulation, which aims to enshrine the important principle of confidentiality of communications and guides companies on cookie usage and direct marketing. We are also actively contributing to the EU debate on the draft eEvidence Regulation, which seeks to facilitate cross-border access to data. We believe this initiative must fully respect the EU Charter of Fundamental Rights and the EU data protection framework.

So What’s Next?

Protecting the privacy of personal data is an ongoing journey. Our approach has never been to check the boxes of compliance and move on. We are continually evaluating how we handle personal data and looking for ways to minimize the amount of personal data we receive. We will continue to be self-critical and examine our own motivations for the technologies we develop. And we will keep working, just as we have for the past ten years, to find new ways to secure privacy and security for our customers and for the Internet as a whole.

Monday, 27 January

09:48

JavaScript Libraries Are Almost Never Updated Once Installed [The Cloudflare Blog]

Cloudflare helps run CDNJS, a very popular way of including JavaScript and other frontend resources on web pages. With the CDNJS team’s permission we collect anonymized and aggregated data from CDNJS requests which we use to understand how people build on the Internet. Our analysis today is focused on one question: once installed on a site, do JavaScript libraries ever get updated?

Let’s consider jQuery, the most popular JavaScript library on Earth. This chart shows the number of requests made for a selected list of jQuery versions over the past 12 months:

Spikes in the CDNJS data as you see with version 3.3.1 are not uncommon as very large sites add and remove CDNJS script tags.

We see a steady rise of version 3.4.1 following its release on May 2nd, 2019. What we don’t see is a substantial decline in old versions. Version 3.2.1 shows an average popularity of 36M requests at the beginning of our sample and 29M at the end, a decline of approximately 20%. This aligns with a corpus of research showing that the average website lasts somewhere between two and four years. What we also don’t see is a decline in old versions that comes anywhere close to matching the growth of new versions when they’re released. In fact, the release of 3.4.1, as popular as it quickly becomes, doesn’t change the trend of old version deprecation at all.

If you’re curious, the oldest version of jQuery that CDNJS includes is 1.10.0, released on May 25, 2013. That version still gets an average of 100k requests per day, and the sites which use it are growing in popularity:



To confirm our theory, let’s consider another project, TweenMax:



As this package isn’t as popular as jQuery, the data has been smoothed with a one week trailing average to make it easier to identify trends.

Version 1.20.4 begins the year with 18M requests, and ends it with 14M, a decline of about 23%, again in alignment with the loss of websites on the Internet. The growth of 2.1.3 shows clear evidence that the release of a new version has almost no bearing on the popularity of old versions: the trend line for those older versions doesn’t change even as 2.1.3 grows to 29M requests per day.


One conclusion is that whatever libraries you publish will exist on websites forever. The underlying web platform must consequently support aged conventions indefinitely if it is to continue supporting the full breadth of the web.
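If you’re curious how stale a given page is, one rough way to check (a sketch only, not part of our data pipeline: api.cdnjs.com and its response fields are the public CDNJS API, jq is assumed to be installed, and example.com stands in for a real site) is to compare the version a page pins against the latest version CDNJS knows about:

curl -s https://example.com | grep -o 'cdnjs.cloudflare.com/ajax/libs/jquery/[0-9.]*' | sort -u
curl -s https://api.cdnjs.com/libraries/jquery | jq -r '.version'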

Cloudflare is very interested in how we can contribute to a web which is kept up-to-date. Please make suggestions in the comments below.

01:00

Build your own cloud with Fedora 31 and Nextcloud Server [Fedora Magazine]

Nextcloud is a software suite for storing and syncing your data across multiple devices. You can learn more about Nextcloud Server’s features from https://github.com/nextcloud/server.

This article demonstrates how to build a personal cloud using Fedora and Nextcloud in a few simple steps. For this tutorial you will need a dedicated computer or a virtual machine running Fedora 31 server edition and an internet connection.

Step 1: Install the prerequisites

Before installing and configuring Nextcloud, a few prerequisites must be satisfied.

First, install Apache web server:

# dnf install httpd

Next, install PHP and some additional modules. Make sure that the PHP version being installed meets Nextcloud’s requirements:

# dnf install php php-gd php-mbstring php-intl php-pecl-apcu php-mysqlnd php-pecl-redis php-opcache php-imagick php-zip php-process

After PHP is installed, enable and start the Apache web server:

# systemctl enable --now httpd

Next, allow HTTP traffic through the firewall:

# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload

Next, install the MariaDB server and client:

# dnf install mariadb mariadb-server

Then enable and start the MariaDB server:

# systemctl enable --now mariadb

Now that MariaDB is running on your server, you can run the mysql_secure_installation command to secure it:

# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL
      MariaDB SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP
      CAREFULLY!

In order to log into MariaDB to secure it, we'll need the
current password for the root user.  If you've just installed
MariaDB, and you haven't set the root password yet, the password
will be blank, so you should just press enter here.

Enter current password for root (enter for none): <ENTER>
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into
the MariaDB root user without the proper authorization.

Set root password? [Y/n] <ENTER>
New password: Your_Password_Here
Re-enter new password: Your_Password_Here

Password updated successfully!

Reloading privilege tables...
 ... Success!

By default, a MariaDB installation has an anonymous user,
allowing anyone to log into MariaDB without having to have
a user account created for them.  This is intended only for
testing, and to make the installation go a bit smoother.  You
should remove them before moving into a production environment.

Remove anonymous users? [Y/n] <ENTER>
 ... Success!

Normally, root should only be allowed to connect from
'localhost'.  This ensures that someone cannot guess at the
root password from the network.

Disallow root login remotely? [Y/n] <ENTER>
 ... Success!

By default, MariaDB comes with a database named 'test' that
anyone can access.  This is also intended only for testing, and
should be removed before moving into a production environment.

Remove test database and access to it? [Y/n] <ENTER>

 - Dropping test database...
 ... Success!

 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? [Y/n] <ENTER>
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your
MariaDB installation should now be secure.

Thanks for using MariaDB!

Next, create a dedicated user and database for your Nextcloud instance:

# mysql -p
> create database nextcloud;
> create user 'nc_admin'@'localhost' identified by 'SeCrEt';
> grant all privileges on nextcloud.* to 'nc_admin'@'localhost';
> flush privileges;
> exit;

Step 2: Install Nextcloud Server

Now that the prerequisites for your Nextcloud installation have been satisfied, download and unzip the Nextcloud archive:

# wget https://download.nextcloud.com/server/releases/nextcloud-17.0.2.zip
# unzip nextcloud-17.0.2.zip -d /var/www/html/

Next, create a data folder and grant Apache read and write access to the nextcloud directory tree:

# mkdir /var/www/html/nextcloud/data
# chown -R apache:apache /var/www/html/nextcloud

SELinux must be configured to work with Nextcloud. The basic commands are shown below; depending on which Nextcloud features you use, additional rules may be needed. They are documented here: Nextcloud SELinux configuration

# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'
# semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/3rdparty/aws/aws-sdk-php/src/data/logs(/.*)?'
# restorecon -Rv '/var/www/html/nextcloud/'

Step 3: Configure Nextcloud

Nextcloud can be configured using its web interface or from the command line.

Using the web interface

From your favorite browser, access http://your_server_ip/nextcloud and fill in the required fields (the admin account, the data folder, and the database credentials you created earlier).

Using the command line

From the command line, just enter the following, substituting the values you used when you created a dedicated Nextcloud user in MariaDB earlier:

# sudo -u apache php occ maintenance:install --data-dir /var/www/html/nextcloud/data/ --database "mysql" --database-name "nextcloud" --database-user "nc_admin" --database-pass "DB_SeCuRe_PaSsWoRd" --admin-user "admin" --admin-pass "Admin_SeCuRe_PaSsWoRd"
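As a quick, optional sanity check (not part of the original steps; occ status is a standard occ subcommand), you can ask occ whether the installation succeeded. Run it from the directory where occ lives:

# cd /var/www/html/nextcloud
# sudo -u apache php occ status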

Final Notes

  • I used the http protocol, but Nextcloud also works over https. I might write a follow-up about securing Nextcloud in a future article.
  • I originally disabled SELinux, but your server will be more secure if you configure it as shown above.
  • The recommended PHP memory limit for Nextcloud is 512M. To change it, edit the memory_limit variable in the /etc/php.ini configuration file and restart your httpd service (a short example follows these notes).
  • By default, the web interface can only be accessed using the http://localhost/ URL. If you want to allow access using other domain names, you can do so by editing the /var/www/html/nextcloud/config/config.php file. The * character can be used to bypass the domain name restriction and allow the use of any URL that resolves to one of your server’s IP addresses.
'trusted_domains' =>
    array (
        0 => 'localhost',
        1 => '*',
    ),
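As a minimal sketch of the memory_limit change mentioned above (the sed pattern assumes the stock /etc/php.ini layout; adjust it if your file differs):

# grep '^memory_limit' /etc/php.ini
# sed -i 's/^memory_limit = .*/memory_limit = 512M/' /etc/php.ini
# systemctl restart httpd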

— Updated on January 28th, 2020 to include SELinux configuration —

Sunday, 26 January

Friday, 24 January

01:00

Thunderbolt – how to use keyboard during boot time [Fedora Magazine]

Problem statement

Imagine you bought a new laptop with a shiny new USB-C docking station. You install fresh Fedora and encrypt your hard drive, because a laptop is travel equipment and you do not want to travel around with a non-encrypted hard drive. You finish the installation, close the lid because you have an external monitor, reboot the machine, and finally you would like to enter the LUKS password using the external keyboard attached via USB 2.0 to the USB-C docking station, but it does not work!

The keyboard does not respond at all. So you open the lid and try the built-in keyboard, which works just fine, and once the machine boots the external keyboard works just fine as well. What is the problem?

What is this Thunderbolt anyway and why would anyone want it?

Thunderbolt is a hardware interface to connect peripherals such as monitors, external network cards [1] or even graphics cards [1]. The physical connector is the same as USB-C, but there is usually a label with a little lightning bolt right next to the port to differentiate “plain” USB-C from Thunderbolt ports.

Of course it comes with very high transmission speeds to support such demanding peripherals, but it also comes with certain security risks. To achieve transmission speeds like this, Thunderbolt uses Direct Memory Access (DMA) for the peripheral devices. As the name suggests, this method allows the external device to read and write memory directly, without talking to the running operating system.

I guess you can already spot the problem here. If a stranger walks by my laptop (even with the screen locked), is it really possible to just attach a device and read the contents of my computer’s memory? Let’s discuss it in more detail.

User facing solution for Thunderbolt security

In recent versions, GNOME Settings includes a tab for Thunderbolt device configuration. You can enable and disable DMA access for external devices, and you can also verify the identity of those devices.

bolt is the component responsible for managing Thunderbolt devices. See man 8 boltd for more information.

CLI tools

Of course, it is also possible to control the same settings from the command line. I suggest you read man boltctl or check the upstream repository directly: https://gitlab.freedesktop.org/bolt/bolt
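For instance, a quick hedged sketch (the subcommands come from the boltctl man page; the device UUID is a placeholder you would copy from the list output):

boltctl list                  # show connected and previously stored devices
boltctl enroll <device-uuid>  # authorize a device and store it permanently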

Pre-boot support – solution to the keyboard problem

In the pre-boot environment, the situation is slightly different. The userspace service responsible for device verification is not yet running, so if a device is to be allowed, the firmware must do it. To enable this feature, go to your BIOS and look for “support in pre boot environment”. For example, this is how it looks on a Lenovo laptop:

Once you enable this feature, bolt will add any verified device to a list of allowed devices. The next time you boot your machine, you should be able to use your external keyboard.

Run boltctl and look for “bootacl”. Make sure that the list of allowed devices contains the one you wish to use.

Also note the “security: secure” line. If you see anything else, for instance “security: user”, I recommend reconfiguring the BIOS.

Technical details of the pre-boot support

There is one unfortunate technical detail about this solution. Thunderbolt supports different security levels. For running Fedora, I recommend the “secure” level, which verifies that a device is indeed the one it claims to be by using a per-device key generated by the host and stored in the device. Firmware, on the other hand, will only use the “user” level, which relies on a simple UUID provided by the device. The difference is that a malicious device could claim to be a different one by providing the same UUID as a legitimate device. Anyway, this should not be a problem, as the memory does not contain any sensitive data yet.

You can find more technical details in this blog post: https://christian.kellner.me/2019/02/11/thunderbolt-preboot-access-control-list-support-in-bolt/

Conclusion

As you can see, in a recent enough Fedora version the solution is a simple switch in the BIOS. So if you are still opening your laptop during boot, go ahead and configure it so you don’t have to do it next time. Meanwhile, check that the default security level is “secure” instead of “user” [5].

Sources:

[1] https://www.intel.com/content/www/us/en/products/docs/io/thunderbolt/thunderbolt-technology-developer.html

[2] https://christian.kellner.me/2019/02/11/thunderbolt-preboot-access-control-list-support-in-bolt/

[3] https://gitlab.freedesktop.org/bolt/bolt

[4] https://wiki.gnome.org/Design/Whiteboards/ThunderboltAccess

[5] https://christian.kellner.me/2019/02/27/thunderclap-and-linux/

Thursday, 23 January

Wednesday, 22 January

01:00

Set up an offline command line dictionary in Fedora [Fedora Magazine]

You don’t need an internet connection to have an easily searchable and extendable dictionary on your Fedora computer. You can use sdcv (StarDict under Console Version) and the public Stardict files on the default repositories to keep a local record for offline use. This article shows you how.

What is sdcv?

sdcv is a command line variant of Stardict. Stardict is a part of a long legacy of GUI offline dictionaries. The “dic” files it uses are formatted as colon-delimited files, with the word in the first column and the definition in the second column. You can have multiple lines with the same word and different definitions. sdcv will provide you with a search function and a formatted display of your results.

Installing sdcv

You can get started quickly with sdcv and the English dictionary by installing them from the default repos:

sudo dnf install sdcv stardict-dic-en

sdcv will be ready for use right away. If you want to see what other languages are available, use this command:

dnf search stardict

How to use sdcv

sdcv has an interactive and non-interactive mode. You can perform a quick search on a word or term using this command:

sdcv word

For example, you could search sdcv linux. Alternately, you can run sdcv by itself to activate interactive mode.

Customizing sdcv

sdcv has a --color option that adds coloring to the words and source of the definition. You can also use an alias to enable --color by default. Simply edit your shell resource file (default on Fedora is ~/.bashrc) to add this command:

alias sdcv="sdcv --color"

You can also use a more friendly name like this: 

alias describe="sdcv --color"

sdcv references /usr/share/stardict/dic by default, or it uses the path located in the shell variable STARDICT_DATA_DIR. You can also set up a personal dictionary in the directory $HOME/.stardict/dic.
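For example, a small sketch based on the paths above (the alternate dictionary directory is purely illustrative, and --list-dicts simply shows which dictionaries sdcv can find):

mkdir -p ~/.stardict/dic
sdcv --list-dicts
STARDICT_DATA_DIR=~/dictionaries sdcv --color linux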

Fun facts

Believe it or not, the dict network protocol is still alive to this day. You can use it with the curl command by using a command like this to search for a word:

curl dict://dict.org/d:<word>

This pulls definitions straight from the internet via your command line. Enjoy using sdcv!


Photo by Pisit Heng on Unsplash.

Tuesday, 21 January

17:00

Streams and Monk – How Yelp is Approaching Kafka in 2020 [Yelp Engineering and Product Blog]

We launched our very first Kafka cluster at Yelp more than five years ago. It was not monitored, did not expose any metrics, and we definitely did not have anyone on call for it. One year later, Kafka had already become one of the most important distributed systems running at Yelp, and today has become one of the core components of our infrastructure. Kafka has come a long way since the 0.8 version we were running back then, and our tooling (some of it open-source) has also significantly improved, increasing reliability and reducing the amount of operational work required to...

Monday, 20 January

01:00

Learning about Partitions and How to Create Them for Fedora [Fedora Magazine]

Operating system distributions try to craft a one size fits all partition layout for their file systems. Distributions cannot know the details about how your hardware is configured or how you use your system though. Do you have more than one storage drive? If so, you might be able to get a performance benefit by putting the write-heavy partitions (var and swap for example) on a separate drive from the others that tend to be more read-intensive since most drives cannot read and write at the same time. Or maybe you are running a database and have a small solid-state drive that would improve the database’s performance if its files are stored on the SSD.

The following sections attempt to describe in brief some of the historical reasons for separating some parts of the file system out into separate partitions so that you can make a more informed decision when you install your Linux operating system.

If you know more (or contradictory) historical details about the partitioning decisions that shaped the Linux operating systems used today, contribute what you know below in the comments section!

Common partitions and why or why not to create them

The boot partition

One of the reasons for putting the /boot directory on a separate partition was to ensure that the boot loader and kernel were located within the first 1024 cylinders of the disk. Most modern computers do not have the 1024 cylinder restriction. So for most people, this concern is no longer relevant. However, modern UEFI-based computers have a different restriction that makes it necessary to have a separate partition for the boot loader. UEFI-based computers require that the boot loader (which can be the Linux kernel directly) be on a FAT-formatted file system. The Linux operating system, however, requires a POSIX-compliant file system that can designate access permissions to individual files. Since FAT file systems do not support access permissions, the boot loader must be on a separate file system from the rest of the operating system on modern UEFI-based computers. A single partition cannot be formatted with more than one type of file system.

The var partition

One of the historical reasons for putting the /var directory on a separate partition was to prevent files that were frequently written to (/var/log/* for example) from filling up the entire drive. Since modern drives tend to be much larger and since other means like log rotation and disk quotas are available to manage storage utilization, putting /var on a separate partition may not be necessary. It is much easier to change a disk quota than it is to re-partition a drive.

Another reason for isolating /var was that file system corruption was much more common in the original version of the Linux Extended File System (EXT). The file systems that had more write activity were much more likely to be irreversibly corrupted by a power outage than those that did not. By partitioning the disk into separate file systems, one could limit the scope of the damage in the event of file system corruption. This concern is no longer as significant because modern file systems support journaling.

The home partition

Having /home on a separate partition makes it possible to re-format the other partitions without overwriting your home directories. However, because modern Linux distributions are much better at doing in-place operating system upgrades, re-formatting shouldn’t be needed as frequently as it might have been in the past.

It can still be useful to have /home on a separate partition if you have a dual-boot setup and want both operating systems to share the same home directories. Or if your operating system is installed on a file system that supports snapshots and rollbacks and you want to be able to rollback your operating system to an older snapshot without reverting the content in your user profiles. Even then, some file systems allow their descendant file systems to be rolled back independently, so it still may not be necessary to have a separate partition for /home. On ZFS, for example, one pool/partition can have multiple descendant file systems.

The swap partition

The swap partition reserves space for the contents of RAM to be written to permanent storage. There are pros and cons to having a swap partition. A pro of having swap memory is that it theoretically gives you time to gracefully shutdown unneeded applications before the OOM killer takes matters into its own hands. This might be important if the system is running mission-critical software that you don’t want abruptly terminated. A con might be that your system runs so slow when it starts swapping memory to disk that you’d rather the OOM killer take care of the problem for you.

Another use for swap memory is hibernation mode. This might be where the rule that the swap partition should be twice the size of your computer’s RAM originated. Ideally, you should be able to put a system into hibernation even if nearly all of its RAM is in use. Beware that Linux’s support for hibernation is not perfect. It is not uncommon that after a Linux system is resumed from hibernation some hardware devices are left in an inoperable state (for example, no video from the video card or no internet from the WiFi card).

In any case, having a swap partition is more a matter of taste. It is not required.

The root partition

The root partition (/) is the catch-all for all directories that have not been assigned to a separate partition. There is always at least one root partition. BIOS-based systems that are new enough to not have the 1024 cylinder limit can be configured with only a root partition and no others so that there is never a need to resize a partition or file system if space requirements change.

The EFI system partition

The EFI System Partition (ESP) serves the same purpose on UEFI-based computers as the boot partition did on the older BIOS-based computers. It contains the boot loader and kernel. Because the files on the ESP need to be accessible by the computer’s firmware, the ESP has a few restrictions that the older boot partition did not have. The restrictions are:

  1. The ESP must be formatted with a FAT file system (vfat in Anaconda)
  2. The ESP must have a special type-code (EF00 when using gdisk)

Because the older boot partition did not have file system or type-code restrictions, it is permissible to apply the above properties to the boot partition and use it as your ESP. Note, however, that the GRUB boot loader does not support combining the boot and ESP partitions. If you use GRUB, you will have to create a separate partition and mount it beneath the /boot directory.
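As a hedged sketch of what those two restrictions look like in practice (the device name is only an example; sgdisk is gdisk’s scriptable front end):

sgdisk --new=1:0:+512M --typecode=1:ef00 /dev/sda   # EF00 marks an EFI System Partition
mkfs.fat -F32 /dev/sda1                             # the ESP must be FAT-formatted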

The Boot Loader Specification (BLS) lists several reasons why it is ideal to use the legacy boot partition as your ESP. The reasons include:

  1. The UEFI firmware should be able to load the kernel directly. Having a separate, non-ESP compliant boot partition for the kernel prevents the UEFI firmware from being able to directly load the kernel.
  2. Nesting the ESP mount point three mount levels deep increases the likelihood that an intermediate mount could fail or otherwise be unavailable when needed. That is, requiring root (/), then boot (/boot), then efi (/efi) to be consecutively mounted is unnecessarily complex and prone to error.
  3. Requiring the boot loader to be able to read other partitions/disks which may be formatted with arbitrary file systems is non-trivial. Even when the boot loader does contain such code, the code that works at installation time can become outdated and fail to access the kernel/initrd after a file system update. This is currently true of GRUB’s ZFS file system driver, for example. You must be careful not to update your ZFS file system if you use the GRUB boot loader or else your system may not come back up the next time you reboot.

Besides the concerns listed above, it is a good idea to have your startup environment — up to and including your initramfs — on a single self-contained file system for recovery purposes. Suppose, for example, that you need to rollback your root file system because it has become corrupted or it has become infected with malware. If your kernel and initramfs are on the root file system, you may be unable to perform the recovery. By having the boot loader, kernel, and initramfs all on a single file system that is rarely accessed or updated, you can increase your chances of being able to recover the rest of your system.

In summary, there are many ways that you can lay out your partitions, and the type of hardware (BIOS or UEFI) and the brand of boot loader (GRUB, Syslinux or systemd-boot) are among the factors that will influence which layouts will work.

Other considerations

MBR vs. GPT

GUID Partition Table (GPT) is the newer partition format that supports larger disks. GPT was designed to work with the newer UEFI firmware. It is backward-compatible with the older Master Boot Record (MBR) partition format but not all boot loaders support the MBR boot method. GRUB and Syslinux support both MBR and UEFI, but systemd-boot only supports the newer UEFI boot method.

By using GPT now, you can increase the likelihood that your storage device, or an image of it, can be transferred over to a newer computer in the future should you wish to do so. If you have an older computer that natively supports only MBR-partitioned drives, you may need to add the inst.gpt parameter to Anaconda when starting the installer to get it to use the newer format. How to add the inst.gpt parameter is shown in the below video titled “Partitioning a BIOS Computer”.

If you use the GPT partition format on a BIOS-based computer, and you use the GRUB boot loader, you must additionally create a one megabyte biosboot partition at the start of your storage device. The biosboot partition is not needed by any other brand of boot loader. How to create the biosboot partition is demonstrated in the below video titled “Partitioning a BIOS Computer”.
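A hedged sketch of that step with sgdisk (again, the device name is only an example; the Anaconda installer can also create this partition for you):

sgdisk --new=1:0:+1M --typecode=1:ef02 /dev/sda   # EF02 is gdisk's type code for the BIOS boot partition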

LVM

One last thing to consider when manually partitioning your Linux system is whether to use standard partitions or logical volumes. Logical volumes are managed by the Logical Volume Manager (LVM). You can setup LVM volumes directly on your disk without first creating standard partitions to hold them. However, most computers still require that the boot partition be a standard partition and not an LVM volume. Consequently, having LVM volumes only increases the complexity of the system because the LVM volumes must be created within standard partitions.

The main features of LVM — online storage resizing and clustering — are not really applicable to the typical end user. Most laptops do not have hot-swappable drive bays for adding or reconfiguring storage while the system is running. And not many laptop or desktop users have clvmd configured so they can access a centralized storage device concurrently from multiple client computers.

LVM is great for servers and clusters. But it adds extra complexity for the typical end user. Go with standard partitions unless you are a server admin who needs the more advanced features.

Video demonstrations

Now that you know which partitions you need, you can watch the short video demonstrations below to see how to manually partition a Fedora Linux computer from the Anaconda installer.

These videos demonstrate creating only the minimally required partitions. You can add more if you choose.

Because the GRUB boot loader requires a more complex partition layout on UEFI systems, the below video titled “Partitioning a UEFI Computer” additionally demonstrates how to install the systemd-boot boot loader. By using the systemd-boot boot loader, you can reduce the number of needed partitions to just two — boot and root. How to use a boot loader other than the default (GRUB) with Fedora’s Anaconda installer is officially documented here.

Partitioning a UEFI Computer
Partitioning a BIOS Computer

Sunday, 19 January

Friday, 17 January

03:40

Fedora CoreOS out of preview [Fedora Magazine]

The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now available for general use. Here are some more details about this exciting delivery.

Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It’s the successor to both Fedora Atomic Host and CoreOS Container Linux and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the announcement of the preview release.

Some highlights of the current Fedora CoreOS release:

  • Automatic updates, with staged deployments and phased rollouts
  • Built from Fedora 31, featuring:
    • Linux 5.4
    • systemd 243
    • Ignition 2.1
  • OCI and Docker Container support via Podman 1.7 and Moby 18.09
  • cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration

Fedora CoreOS is available on a variety of platforms:

  • Bare metal, QEMU, OpenStack, and VMware
  • Images available in all public AWS regions
  • Downloadable cloud images for Alibaba, AWS, Azure, and GCP
  • Can run live from RAM via ISO and PXE (netboot) images

Fedora CoreOS is under active development.  Planned future enhancements include:

  • Addition of the next release stream for extended testing of upcoming Fedora releases.
  • Support for additional cloud and virtualization platforms, and processor architectures other than x86_64.
  • Closer integration with Kubernetes distributions, including OKD.
  • Aggregate statistics collection.
  • Additional documentation.

Where do I get it?

To try out the new release, head over to the download page to get OS images or cloud image IDs.  Then use the quick start guide to get a machine running quickly.

How do I get involved?

It’s easy!  You can report bugs and missing features to the issue tracker. You can also discuss Fedora CoreOS in Fedora Discourse, the development mailing list, in #fedora-coreos on Freenode, or at our weekly IRC meetings.

Are there stability guarantees?

In general, the Fedora Project does not make any guarantees around stability.  While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems.  We’ve found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems.

We’ll continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions.  Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS.  We will continue to announce any major planned or unplanned breakage to the coreos-status mailing list, along with recommended mitigations.

How do I migrate from CoreOS Container Linux?

Container Linux machines cannot be migrated in place to Fedora CoreOS.  We recommend writing a new Fedora CoreOS Config to provision Fedora CoreOS machines.  Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine.
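As a hedged sketch of that transpiler step (the container image name and flags below reflect the Fedora CoreOS Config Transpiler documentation at the time of writing and may change; example.fcc is a placeholder file):

podman run --rm -i quay.io/coreos/fcct:release --pretty --strict < example.fcc > example.ign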

Whether you’re currently provisioning your Container Linux machines using a Container Linux Config, handwritten Ignition config, or cloud-config, you’ll need to adjust your configs for differences between Container Linux and Fedora CoreOS.  For example, on Fedora CoreOS network configuration is performed with NetworkManager key files instead of systemd-networkd, and time synchronization is performed by chrony rather than systemd-timesyncd.  Initial migration documentation will be available soon and a skeleton list of differences between the two OSes is available in this issue.

CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life.  We’ll announce the exact end-of-life date later this month.

How do I migrate from Fedora Atomic Host?

Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible.  We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend writing a Fedora CoreOS Config and using it to provision new Fedora CoreOS machines.  As with CoreOS Container Linux, you’ll need to adjust your existing cloud-configs for differences between Fedora Atomic Host and Fedora CoreOS.

Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think!

Thursday, 16 January

09:13

Announcing the Cloudflare Access App Launch [The Cloudflare Blog]


Every person joining your team has the same question on Day One: how do I find and connect to the applications I need to do my job?

Since launch, Cloudflare Access has helped improve how users connect to those applications. When you protect an application with Access, users never have to connect to a private network and never have to deal with a clunky VPN client. Instead, they reach on-premise apps as if they were SaaS tools. Behind the scenes, Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Administrators need about an hour to deploy Access. End user logins take about 20 ms, and that response time is consistent globally. Unlike VPN appliances, Access runs in every data center in Cloudflare’s network in 200 cities around the world. When Access works well, it should be easy for administrators and invisible to the end user.

However, users still need to locate the applications behind Access, and for internally managed applications, traditional dashboards require constant upkeep. As organizations grow, that roster of links keeps expanding. Department leads and IT administrators can create and publish manual lists, but those become a chore to maintain. Teams need to publish custom versions for contractors or partners that only make certain tools visible.

Starting today, teams can use Cloudflare Access to solve that challenge. We’re excited to announce the first feature in Access built specifically for end users: the Access App Launch portal.

The Access App Launch is a dashboard for all the applications protected by Access. Once enabled, end users can login and connect to every app behind Access with a single click.

How does it work?

When administrators secure an application with Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

To check identity, Access relies on the identity provider that the team already uses. Access integrates with providers like OneLogin, Okta, AzureAD, G Suite and others to determine who a user is. If the user has not logged in yet, Access will prompt them to do so at the identity provider configured.


When the user logs in, they are redirected through a subdomain unique to each Access account. Access assigns that subdomain based on a hostname already active in the account. For example, an account with the hostname “widgetcorp.tech” will be assigned “widgetcorp.cloudflareaccess.com”.


The Access App Launch uses the unique subdomain assigned to each Access account. Now, when users visit that URL directly, Cloudflare Access checks their identity and displays only the applications that the user has permission to reach. When a user clicks on an application, they are redirected to the application behind it. Since they are already authenticated, they do not need to login again.

In the background, the Access App Launch decodes and validates the token stored in the cookie on the account’s subdomain.
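As a rough illustration of what that validation rests on (a sketch only: the certs endpoint path and the CF_Authorization cookie name are taken from Cloudflare’s public Access documentation, and widgetcorp.cloudflareaccess.com is the example subdomain from above, not a real account), a backend could fetch the public signing keys for the account subdomain and verify the JWT from the CF_Authorization cookie against them:

curl -s https://widgetcorp.cloudflareaccess.com/cdn-cgi/access/certs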

How is it configured?

The Access App Launch can be configured in the Cloudflare dashboard in three steps. First, navigate to the Access tab in the dashboard. Next, enable the feature in the “App Launch Portal” card. Finally, define who should be able to use the Access App Launch in the modal that appears and click “Save”. Permissions to use the Access App Launch portal do not impact existing Access policies for who can reach protected applications.


Administrators do not need to manually configure each application that appears in the portal. Access App Launch uses the policies already created in the account to generate a page unique to each individual user, automatically.

Defense-in-depth against phishing attacks

Phishing attacks attempt to trick users by masquerading as a legitimate website. In the case of business users, team members think they are visiting an authentic application. Instead, an attacker can present a spoofed version of the application at a URL that looks like the real thing.

Take “example.com” vs “examрle.com” - they look identical, but one uses the Cyrillic “р” and becomes an entirely different hostname. If an attacker can lure a user to visit “examрle.com”, and make the site look like the real thing, that user could accidentally leak credentials or information.


To be successful, the attacker needs to get the victim to visit that fraudulent URL. That frequently happens via email from untrusted senders.

The Access App Launch can help prevent these attacks from targeting internal tools. Teams can instruct users to only navigate to internal applications through the Access App Launch dashboard. When users select a tile in the page, Access will send users to that application using the organization’s SSO.

Cloudflare Gateway can take it one step further. Gateway’s DNS resolver filtering can help defend against phishing attacks that use lookalike versions of legitimate applications that do not sit behind Access. To learn more about adding Gateway, in conjunction with Access, sign up to join the beta here.

What’s next?

As part of last week’s announcement of Cloudflare for Teams, the Access App Launch is now available to all Access customers today. You can get started with instructions here.

Interested in learning more about Cloudflare for Teams? Read more about the announcement and features here.

Wednesday, 15 January

17:00

Automated IDOR Discovery through Stateful Swagger Fuzzing [Yelp Engineering and Product Blog]

Scaling security coverage in a growing company is hard. The only way to do this effectively is to empower front-line developers to be able to easily discover, triage, and fix vulnerabilities before they make it to production servers. Today, we’re excited to announce that we’ll be open-sourcing fuzz-lightyear: a testing framework we’ve developed to identify Insecure Direct Object Reference (IDOR) vulnerabilities through stateful Swagger fuzzing, tailored to support an enterprise, microservice architecture. This integrates with our Continuous Integration (CI) pipeline to provide consistent, automatic test coverage as web applications evolve. The Problem As a class of vulnerabilities, IDOR is arguably...

05:30

Introducing Cloudflare for Campaigns [The Cloudflare Blog]


During the past year, we saw nearly 2 billion global citizens go to the polls to vote in democratic elections. There were major elections in more than 50 countries, including India, Nigeria, and the United Kingdom, as well as elections for the European Parliament. In 2020, we will see a similar number of elections in countries from Peru to Myanmar. In November, U.S. citizens will cast their votes for the 46th President, 435 seats in the U.S. House of Representatives, 35 of the 100 seats in the U.S. Senate, and many state and local elections.

Recognizing the importance of maintaining public access to election information, Cloudflare launched the Athenian Project in 2017, providing U.S. state and local government entities with the tools needed to secure their election websites for free. As we’ve seen, however, political parties and candidates for office all over the world are also frequent targets for cyberattack. Cybersecurity needs for campaign websites and internal tools are at an all time high.

Although Cloudflare has helped improve the security and performance of political parties and candidates for office all over the world for years, we’ve long felt that we could do more. So today, we’re announcing Cloudflare for Campaigns, a suite of Cloudflare services tailored to campaign needs. Cloudflare for Campaigns is designed to make it easier for all political campaigns and parties, especially those with small teams and limited resources, to get access to cybersecurity services.

Risks faced by political campaigns

Since Russians attempted to use cyberattacks to interfere in the U.S. Presidential election in 2016, the news has been filled with reports of cyber threats against political campaigns, in both the United States and around the world. Hackers targeted the Presidential campaigns of Emmanuel Macron in France and Angela Merkel in Germany with phishing attacks, the main political parties in the UK with DDoS attacks, and congressional campaigns in California with a combination of malware, DDoS attacks and brute force login attempts.

Both because of our services to state and local government election websites through the Athenian Project and because a significant number of political parties and candidates for office use our services, Cloudflare has seen many attacks on election infrastructure and political campaigns firsthand.

During the 2020 U.S. election cycle, Cloudflare has provided services to 18 major presidential campaigns, as well as a range of congressional campaigns. On a typical day, Cloudflare blocks 400,000 attacks against political campaigns, and, on a busy day, Cloudflare blocks more than 40 million attacks against campaigns.

What is Cloudflare for Campaigns?

Cloudflare for Campaigns is a suite of Cloudflare products focused on the needs of political campaigns, particularly smaller campaigns that don’t have the resources to bring significant cybersecurity resources in house. To ensure the security of a campaign website, the Cloudflare for Campaigns package includes Business-level service, as well as security tools particularly helpful for political campaigns websites, such as the web application firewall, rate limiting, load balancing, Enterprise level “I am Under Attack Support”, bot management, and multi-user account enablement.


To ensure the security of internal campaign teams, the Cloudflare for Campaigns service will also provide tools for campaigns to ensure the security of their internal teams with Cloudflare Access, allowing for campaigns to secure, authenticate, and monitor user access to any domain, application, or path on Cloudflare, without using a VPN. Along with Access, we will be providing Cloudflare Gateway with DNS-based filtering at multiple locations to protect campaign staff as they navigate the Internet by keeping malicious content off the campaign’s network using DNS filtering, helping prevent users from running into phishing scams or malware sites. Campaigns can use Gateway after the product’s public release.

Cloudflare for Campaigns also includes the Cloudflare reliability and security guide, a set of best practices for political campaigns to maintain their campaign sites and secure their internal teams.

Regulatory Challenges

Although there is widespread agreement that campaigns and political parties face threats of cyberattack, there is less consensus on how best to get political campaigns the help they need.  Many political campaigns and political parties operate under resource constraints, without the technological capability and financial resources to dedicate to cybersecurity. At the same time, campaigns around the world are the subject of a variety of different regulations intended to prevent corruption of democratic processes. As a practical matter, that means that, although campaigns may not have the resources needed to access cybersecurity services, donation of cybersecurity services to campaigns may not always be allowed.

In the U.S., campaign finance regulations prohibit corporations from providing any contributions of either money or services to federal candidates or political party organizations. These rules prevent companies from offering free or discounted services if those services are not provided on the same terms and conditions to similarly situated members of the general public. The Federal Election Commission (FEC), which enforces U.S. campaign finance laws, has struggled with the issue of how best to apply those rules to the provision of free or discounted cybersecurity services to campaigns. In consideration of a number of advisory opinions, they have publicly wrestled with the competing priorities of securing campaigns from cyberattack while not opening a backdoor to the donation of goods or services intended to curry favor with particular candidates.

The FEC has issued two advisory opinions to tech companies seeking to provide free or discounted cybersecurity services to campaigns. In 2018, the FEC approved a request by Microsoft to offer a package of enhanced online account security protections for “election-sensitive” users. The FEC reasoned that Microsoft was offering the services to its paid users “based on commercial rather than political considerations, in the ordinary course of its business and not merely for promotional consideration or to generate goodwill.” In July 2019, the FEC approved a request by a cybersecurity company to provide low-cost anti-phishing services to campaigns because those services would be provided in the ordinary course of business and on the same terms and conditions as offered to similarly situated non-political clients.

In September 2018, a month after Microsoft submitted its request, Defending Digital Campaigns (DDC), a nonprofit established with the mission to “secure our democratic campaign process by providing eligible campaigns and political parties, committees, and related organizations with knowledge, training, and resources to defend themselves from cyber threats,” submitted a request to the FEC to offer free or reduced-cost cybersecurity services, including from technology corporations, to federal candidates and parties. Over the following months, the FEC issued and requested comment on multiple draft opinions on whether the donation was permissible and, if so, on what basis. As described by the FEC, to support its position, DDC represented that “federal candidates and parties are singularly ill-equipped to counteract these threats.” The FEC’s advisory opinion to DDC noted:

“You [DDC] state that presidential campaign committees and national party committees require expert guidance on cybersecurity and you contend that the 'vast majority of campaigns' cannot afford full-time cybersecurity staff and that 'even basic cybersecurity consulting software and services' can overextend the budgets of most congressional campaigns. AOR004. For instance, you note that a congressional candidate in California reported a breach to the Federal Bureau of Investigation (FBI) in March of this year but did not have the resources to hire a professional cybersecurity firm to investigate the attack, or to replace infected computers. AOR003.”

In May 2019, the FEC approved DDC’s request to partner with technology companies to provide free and discounted cybersecurity services “[u]nder the unusual and exigent circumstances” presented by the request and “in light of the demonstrated, currently enhanced threat of foreign cyberattacks against party and candidate committees.”

All of these opinions demonstrate the FEC’s desire to allow campaigns to access affordable cybersecurity services because of the heightened threat of cyberattack, while still being cautious to ensure that those services are offered transparently and consistent with the goals of campaign finance laws.

Partnering with DDC to Provide Free Services to US Candidates

We share the view of both DDC and the FEC that political campaigns -- which are central to our democracy -- must have the tools to protect themselves against foreign cyberattack. Cloudflare is therefore excited to announce a new partnership with DDC to provide Cloudflare for Campaigns for free to candidates and parties that meet DDC’s criteria.


To receive free services under DDC, political campaigns must meet the following criteria, as the DDC laid out to the FEC:

  • A House candidate’s committee that has at least $50,000 in receipts for the current election cycle, and a Senate candidate’s committee that has at least $100,000 in receipts for the current election cycle;
  • A House or Senate candidate’s committee for candidates who have qualified for the general election ballot in their respective elections; or
  • Any presidential candidate’s committee whose candidate is polling above five percent in national polls.

For more information on eligibility for these services under DDC and the next steps, please visit cloudflare.com/campaigns/usa.

Election package

Although political campaigns are regulated differently all around the world, Cloudflare believes that the integrity of all political campaigns should be protected against powerful adversaries. With this in mind, Cloudflare will therefore also be offering Cloudflare for Campaigns as a paid service, designed to help campaigns all around the world as we attempt to address regulatory hurdles. For more information on how to sign up for the Cloudflare election package, please visit cloudflare.com/campaigns.






01:00

Develop GUI apps using Flutter on Fedora [Fedora Magazine]

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install

  • the Android SDK;
  • the Flutter SDK itself; and,
  • optionally, an IDE and its Flutter plugins.

Installing the Android SDK

Flutter requires the installation of the Android SDK with the entire Android Studio suite of tools. Google provides a tar.gz archive. The Android Studio executable can be found in the android-studio/bin directory and is called studio.sh. To run it, open a terminal, cd into the aforementioned directory, and then run:

$ ./studio.sh

Installing the Flutter SDK

Before you install Flutter you may want to consider what release channel you want to be on.

The stable channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the beta or even the dev channel.

Either way, you can switch between channels after you install using the flutter channel command explained later in the article.

Head over to the official SDK archive page and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply an xz-compressed tarball (.tar.xz extension). You can extract it wherever you want, provided that you add the flutter/bin subdirectory to the PATH environment variable.
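For example, assuming you downloaded the stable bundle and want to keep the SDK under ~/development (both the location and the file name pattern are just illustrations):

$ mkdir -p ~/development
$ tar xf ~/Downloads/flutter_linux_*.tar.xz -C ~/development
$ echo 'export PATH="$PATH:$HOME/development/flutter/bin"' >> ~/.bashrc
$ source ~/.bashrc
$ flutter --version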

Installing the IDE plugins

To install the plugin for Visual Studio Code, you need to search for Flutter in the Extensions tab. Installing it will also install the Dart plugin.

The same will happen when you install the plugin for Android Studio by opening the Settings, then the Plugins tab and installing the Flutter plugin.

Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

Upgrading and Maintaining Your Flutter Installations

The flutter doctor command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from flutter doctor right after installing on Fedora is:

Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
    ✗ Android licenses not accepted.  To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 3.5)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[!] Connected device
    ! No devices available

! Doctor found issues in 3 categories.

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

$ flutter doctor --android-licenses

Use the flutter channel command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

$ flutter channel <channel_name>

…where you’d replace <channel_name> with the release channel you want to switch to.

After doing that, or whenever you feel the need to do it, you need to update your installation. You might consider running this every once in a while or when a major update comes out if you follow Flutter news. Run this command:

$ flutter upgrade

Building for Mobile

You can build for Android very easily: the flutter build command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is to create a project with flutter create, which will generate some code for an example app and the necessary android and ios folders.

When you’re done coding you can either run:

  • flutter build apk or flutter build appbundle to generate the necessary app files to distribute, or
  • flutter run to run the app on a connected device or emulator directly.

When you run the app on a phone or emulator with flutter run, you can press the r key in the terminal to trigger a stateful hot reload. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital R character to the debug console, you trigger a hot restart. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the bolt icon button and a hot restart with the typical refresh button.

Building for the Desktop

To build apps for the desktop on Fedora, use the flutter-desktop-embedding repository. The flutter create command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.

To build or run apps for Linux, you also need to be on the master release channel and enable Linux desktop app development. To do this, run:

$ flutter config --enable-linux-desktop

After that, you can use flutter run to run the app on your development workstation directly, or run flutter build linux to build a binary file in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the linux/ directory:

$ flutter create .

Building for the Web

Starting with Flutter 1.12, you can build Web apps using Flutter with the mainline codebase, without having to use the flutter_web forked libraries, but you have to be running on the beta channel.

If you are (you can switch to it using flutter channel beta and flutter upgrade as we’ve seen earlier), you need to enable web development by running flutter config --enable-web.

After doing that, you can run flutter run -d web and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run flutter build web to build the static website files in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the web/ directory:

$ flutter create .

Packages for Installing Flutter

Some distributions offer packages or community repositories that make installing and updating this kind of software more straightforward and intuitive. However, at the time of writing, no such package exists for Flutter on Fedora. If you have experience packaging RPMs for Fedora, consider contributing to this GitHub repository for this COPR package.

The next step is learning Flutter. You can do that in a number of ways:

  • Read the good API reference documentation on the official site
  • Watch some of the introductory video courses available online
  • Read one of the many books out there today. [Check out the author’s bio for a suggestion! — Ed.]

Photo by Randall Ruiz on Unsplash.

Tuesday, 14 January

09:07

A cost-effective and extensible testbed for transport protocol development [The Cloudflare Blog]


This was originally published on Perf Planet's 2019 Web Performance Calendar.

At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on QUIC and HTTP/3, which are still in IETF draft, but gaining a lot of interest.

QUIC is a secure and multiplexed transport protocol that aims to perform better than TCP under some network conditions. It is specified in a family of documents: a transport layer which specifies packet format and basic state machine, recovery and congestion control, security based on TLS 1.3, and an HTTP application layer mapping, which is now called HTTP/3.

Let’s focus on the transport and recovery layer first. This layer provides a basis for what is sent on the wire (the packet binary format) and how we send it reliably. It includes how to open the connection, how to handshake a new secure session with the help of TLS, how to send data reliably, and how to react when packets are lost or reordered. It also includes flow control and congestion control, so the protocol interacts well with other transport protocols on the same network. With confidence in the basic transport and recovery layer, we can take a look at higher application layers such as HTTP/3.

To develop such a transport protocol, we need multiple stages of the development environment. Since this is a network protocol, it’s best to test in an actual physical network to see how it works on the wire. We may start the development using localhost, but after some time we may want to send and receive packets with other hosts. We can build a lab with a couple of virtual machines, using VirtualBox, VMware or even Docker. We also have a local testing environment with a Linux VM. But sometimes these have a limited network (localhost only) or are noisy due to other processes on the same host or virtual machines.

The next step is to have a test lab, typically an isolated network consisting of dedicated x86 hosts and focused on protocol analysis only. Lab configuration is particularly important for testing various cases - there is no one-size-fits-all scenario for protocol testing. For example, EDGE is still running in production mobile networks but LTE is dominant and 5G deployment is in its early stages. WiFi is very common these days. We want to test our protocol in all those environments. Of course, we can't buy every type of machine or have a very expensive network simulator for every type of environment, so using cheap hardware and an open source OS where we can configure similar environments is ideal.

The QUIC Protocol Testing lab

The goal of the QUIC testing lab is to aid transport layer protocol development. To develop a transport protocol we need a way to control our network environment and a way to get as many different types of debugging data as possible. We also need metrics for comparison with other protocols in production.

The QUIC Testing Lab has the following goals:

  • Help with multiple transport protocol development: Developing a new transport layer requires many iterations, from building and validating packets as per protocol spec, to making sure everything works fine under moderate load, to very harsh conditions such as low bandwidth and high packet loss. We need a way to run tests with various network conditions reproducibly in order to catch unexpected issues.
  • Debugging multiple transport protocol development: Recording as much debugging info as we can is important for fixing bugs. Looking into packet captures definitely helps but we also need a detailed debugging log of the server and client to understand the what and why for each packet. For example, when a packet is sent, we want to know why. Is this because there is an application which wants to send some data? Or is this a retransmit of data previously known as lost? Or is this a loss probe which is not an actual packet loss but sent to see if the network is lossy?
  • Performance comparison between each protocol: We want to understand the performance of a new protocol by comparison with existing protocols such as TCP, or with a previous version of the protocol under development. Also we want to test with varying parameters such as changing the congestion control mechanism, changing various timeouts, or changing the buffer sizes at various levels of the stack.
  • Finding bottlenecks or errors easily: When running tests we may see an unexpected error - a transfer that timed out, ended with an error, or was corrupted on the client side. Each test needs to verify it ran correctly, by comparing a checksum of the original file with what was actually downloaded, or by checking various error codes at the protocol or API level.

Having a test lab with separate, dedicated hardware gives us the following benefits:

  • We can configure the testing lab without public Internet access, keeping it safe and quiet.
  • We get handy access to the hardware and its console for maintenance purposes, or for adding or updating hardware.
  • We can try other CPU architectures. For clients we use Raspberry Pis for regular testing because they are ARM-based (32-bit or 64-bit), similar to modern smartphones, so testing on ARM helps with compatibility before going into a smartphone OS.
  • We can add a real smartphone for testing, such as an Android phone or an iPhone. We can test over WiFi, but these devices also support Ethernet, so we can test them on a wired network for better consistency.

Lab Configuration

Here is a diagram of our QUIC Protocol Testing Lab:

[Diagram: the QUIC Protocol Testing Lab]

This is a conceptual diagram; in practice a switch connects the machines. Currently, we have Raspberry Pis (2 and 3) as the Origin and Client, and small Intel x86 boxes for the Traffic Shaper and Edge server, plus Ethernet switches for interconnectivity.

  • Origin simply serves HTTP and HTTPS test objects using a web server. The Client may download a file from Origin directly to simulate a download straight from a customer's origin server.
  • The Client downloads a test object from Origin or Edge, using different protocols. In a typical configuration the Client connects to Edge instead of Origin, to simulate an edge server in the real world. For TCP/HTTP we are using the curl command line client, and for QUIC, quiche’s http3_client with some modifications.
  • Edge runs Cloudflare's web server to serve HTTP/HTTPS via TCP and also the QUIC protocol using quiche. The Edge server is installed with the same Linux kernel used on Cloudflare's production machines in order to have the same low-level network stack.
  • The Traffic Shaper sits between the Client and Edge (and Origin), controlling network conditions. Currently we are using FreeBSD and ipfw + dummynet. Traffic shaping can also be done using Linux's netem, which provides additional network simulation features.

The goal is to run tests with various network conditions, such as bandwidth, latency and packet loss upstream and downstream. The lab is able to run a plaintext HTTP test but currently our focus of testing is HTTPS over TCP and HTTP/3 over QUIC. Since QUIC is running over UDP, both TCP and UDP traffic need to be controlled.

Test Automation and Visualization

In the lab, we have a script installed on the Client which can run a batch of tests with various configuration parameters. For each test combination, we can define a test configuration, including:

  • Network Condition - Bandwidth, Latency, Packet Loss (upstream and downstream)

For example, using the netem traffic shaper we can simulate an LTE network as below (RTT = 50ms, bandwidth = 22Mbps upstream and downstream, with a BDP-sized queue); a variation that adds packet loss is sketched just after this list:

$ tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
$ tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 22mbit buffer 68750 limit 70000
  • Test Object sizes - 1KB, 8KB, … 32MB
  • Test Protocols: HTTPS (TCP) and QUIC (UDP)
  • Number of runs and number of requests in a single connection
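
As a sketch of how other conditions from the list can be layered on, the same root netem qdisc can be modified to add latency jitter and random packet loss (the 5ms jitter and 1% loss figures here are arbitrary examples, not part of our standard profiles):

$ tc qdisc change dev eth0 root handle 1:0 netem delay 25ms 5ms loss 1%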

The test script outputs a CSV file of results for importing into other tools for data processing and visualization - such as Google Sheets, Excel or even a Jupyter notebook. It can also post the results to a database (ClickHouse in our case), so we can query and visualize them.

Sometimes a whole test combination takes a long time - the current standard test set with simulated 2G, 3G, LTE, WiFi and various object sizes repeated 10 times for each request may take several hours to run. Large object testing on a slow network takes most of the time, so sometimes we also need to run a limited test (e.g. testing LTE-like conditions only for a sanity check) for quick debugging.

Chart using Google Sheets:

The following comparison chart shows the total transfer time in msec for TCP vs QUIC under different network conditions. The QUIC protocol used here is a development version.

[Comparison chart: total transfer time (msec) for TCP vs QUIC under different network conditions]

Debugging and performance analysis using a smartphone

Mobile devices have become a crucial part of our day-to-day lives, so testing the new transport protocol on mobile devices is critically important for mobile app performance. To facilitate that, we need a mobile test app which will proxy data over the new transport protocol under development. With this we have the ability to analyze protocol functionality and performance on mobile devices under different network conditions.

Adding a smartphone to the testbed mentioned above gives an advantage in terms of understanding real performance issues. The major smartphone operating systems, iOS and Android, have quite different networking stacks. Adding a smartphone to the testbed gives us the ability to understand these operating systems' network stacks in depth, which aids new protocol design.

[Figure: network block diagram of the smartphone lab testbed]

The above figure shows the network block diagram of another, similar lab testbed used for protocol testing, where a smartphone is connected both wired and wirelessly. A Linux netem-based traffic shaper sits between the client and server, shaping the traffic. Various networking profiles are fed to the traffic shaper to mimic real-world scenarios. The client can be either an Android- or iOS-based smartphone; the server is a vanilla web server serving static files. Client, server and traffic shaper are all connected to the Internet, along with the private lab network for management purposes.

The lab has both Android and iOS mobile devices installed with a test app built with proprietary client proxy software for proxying data over the new transport protocol under development. The test app also has the ability to make HTTP requests over TCP for comparison purposes.

The Android or iOS test app can be used to issue multiple HTTPS requests of different object sizes, sequentially and concurrently, using TCP and QUIC as the underlying transport protocols. The TTOTAL (total transfer time) of each HTTPS request is then used to compare TCP and QUIC performance over different network conditions. One such comparison is shown below:

[Table: total transfer time for TCP and QUIC requests over an LTE network profile, at different object sizes and concurrency levels]

The table above shows the total transfer time taken for TCP and QUIC requests over an LTE network profile fetching different objects with different concurrency levels using the test app. Here TCP goes over native OS network stack and QUIC goes over Cloudflare QUIC stack.

Debugging network performance issues is hard when it comes to mobile devices. By adding an actual smartphone into the testbed itself we have the ability to take packet captures at different layers. These are very critical in analyzing and understanding protocol performance.

It's easy and straightforward to capture packets and analyze them using the tcpdump tool on x86 boxes, but it's a challenge to capture packets on iOS and Android devices. On iOS devices, rvictl lets us capture packets on an external interface, but rvictl has some drawbacks, such as inaccurate timestamps. Since we are dealing with millisecond-level events, timestamps need to be accurate to analyze the root cause of a problem.

We can capture packets on internal loopback interfaces on jailbroken iPhones and rooted Android devices. Jailbreaking a recent iOS device is nontrivial. We also need to make sure that auto-update of any sort is disabled on such a phone, otherwise an update could undo the jailbreak and the whole process would have to start again. With a jailbroken phone we have root access to the device, which lets us take packet captures as needed using tcpdump.

Packet captures taken using jailbroken iOS devices or rooted Android devices connected to the lab testbed help us analyze performance bottlenecks and improve protocol performance.

iOS and Android devices have different network stacks in their core operating systems. These packet captures also help us understand the network stacks of these mobile devices; for example, on iOS devices, packets punted through the loopback interface had a mysterious delay of 5 to 7ms.

Conclusion

Cloudflare is actively involved in helping to drive forward the QUIC and HTTP/3 standards by testing and optimizing these new protocols in simulated real world environments. By simulating a wide variety of networks we are working on our mission of Helping Build a Better Internet. For everyone, everywhere.

We would like to thank SangJo Lee, Hiren Panchasara, Lucas Pardue and Sreeni Tellakula for their contributions.

Monday, 13 January

02:00

How to setup a DNS server with bind [Fedora Magazine]

The Domain Name System, or DNS, as it’s more commonly known, translates or converts domain names into the IP addresses associated with that domain. DNS is the reason you are able to find your favorite website by name instead of typing an IP address into your browser. This guide shows you how to configure a Master DNS system and one client.

Here are system details for the example used in this article:

dns01.fedora.local     (192.168.1.160) - Master DNS server
client.fedora.local    (192.168.1.136) - Client

DNS server configuration

Install the bind packages using sudo:

$ sudo dnf install bind bind-utils -y

The /etc/named.conf configuration file is provided by the bind package to allow you to configure the DNS server.

Edit the /etc/named.conf file:

sudo vi /etc/named.conf

Look for the following line:

listen-on port 53 { 127.0.0.1; };

Add the IP address of your Master DNS server as follows:

listen-on port 53 { 127.0.0.1; 192.168.1.160; };

Look for the next line:

allow-query  { localhost; };

Add your local network range. The example system uses IP addresses in the 192.168.1.X range. This is specified as follows:

allow-query  { localhost; 192.168.1.0/24; };

Specify a forward and reverse zone. Zone files are simply text files that contain the DNS information for your system, such as IP addresses and host names. The forward zone file makes it possible to translate a host name to its IP address. The reverse zone file does the opposite: it allows a remote system to translate an IP address to the host name.

Look for the following line at the bottom of the /etc/named.conf file:

include "/etc/named.rfc1912.zones";

Here, you’ll specify the zone file information directly above that line as follows:

zone "dns01.fedora.local" IN {
type master;
file "forward.fedora.local";
allow-update { none; };
};

zone "1.168.192.in-addr.arpa" IN {
type master;
file "reverse.fedora.local";
allow-update { none; };
};

forward.fedora.local and reverse.fedora.local are just the names of the zone files you will be creating; they can be called anything you like.

Save and exit.

Create the zone files

Create the forward and reverse zone files you specified in the /etc/named.conf file:

$ sudo vi /var/named/forward.fedora.local

Add the following lines:

$TTL 86400
@   IN  SOA     dns01.fedora.local. root.fedora.local. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          dns01.fedora.local.
@       IN  A           192.168.1.160
dns01           IN  A   192.168.1.160
client          IN  A   192.168.1.136

The host names and IP addresses above are specific to your environment. Save the file and exit. Next, edit the reverse.fedora.local file:

$ sudo vi /var/named/reverse.fedora.local

Add the following lines:

$TTL 86400
@   IN  SOA     dns01.fedora.local. root.fedora.local. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          dns01.fedora.local.
@       IN  PTR         fedora.local.
dns01           IN  A   192.168.1.160
client          IN  A   192.168.1.136
160     IN  PTR         dns01.fedora.local.
136     IN  PTR         client.fedora.local.

Again, the host names and IP addresses are specific to your environment. Save the file and exit.

You’ll also need to configure SELinux and add the correct ownership for the configuration files.

sudo chgrp named -R /var/named
sudo chown -v root:named /etc/named.conf
sudo restorecon -rv /var/named
sudo restorecon /etc/named.conf
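
Optionally, you can verify that the ownership and SELinux contexts were applied by listing the directory:

$ sudo ls -laZ /var/named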

Configure the firewall:

sudo firewall-cmd --add-service=dns --perm
sudo firewall-cmd --reload
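
To confirm the change took effect, you can list the allowed services; dns should appear in the output:

$ sudo firewall-cmd --list-services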

Check the configuration for any syntax errors:

sudo named-checkconf /etc/named.conf

Your configuration is valid if no output or errors are returned.

Check the forward and reverse zone files.

$ sudo named-checkzone forward.fedora.local /var/named/forward.fedora.local

$ sudo named-checkzone reverse.fedora.local /var/named/reverse.fedora.local

You should see a response of OK:

zone forward.fedora.local/IN: loaded serial 2011071001
OK

zone reverse.fedora.local/IN: loaded serial 2011071001
OK

Enable and start the DNS service

$ sudo systemctl enable named
$ sudo systemctl start named
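
You can verify the service came up cleanly before continuing:

$ sudo systemctl status named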

Configuring the resolv.conf file

Edit the /etc/resolv.conf file:

$ sudo vi /etc/resolv.conf

Look for your current name server line or lines. On the example system, a cable modem/router is serving as the name server and so it currently looks like this:

nameserver 192.168.1.1

This needs to be changed to the IP address of the Master DNS server:

nameserver 192.168.1.160

Save your changes and exit.

Unfortunately there is one caveat to be aware of. NetworkManager overwrites the /etc/resolv.conf file if the system is rebooted or networking gets restarted. This means you will lose all of the changes that you made.

To prevent this from happening, make /etc/resolv.conf immutable:

$ sudo chattr +i /etc/resolv.conf 

If you want to set it back and allow it to be overwritten again:

$ sudo chattr -i /etc/resolv.conf

Testing the DNS server

$ dig fedoramagazine.org
; <<>> DiG 9.11.13-RedHat-9.11.13-2.fc30 <<>> fedoramagazine.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8391
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: c7350d07f8efaa1286c670ab5e13482d600f82274871195a (good)
 ;; QUESTION SECTION:
 ;fedoramagazine.org.        IN  A

;; ANSWER SECTION:
 fedoramagazine.org.    50  IN  A   35.197.52.145

;; AUTHORITY SECTION:
 fedoramagazine.org.    86150   IN  NS  ns05.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns02.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns04.fedoraproject.org.

;; ADDITIONAL SECTION:
 ns02.fedoraproject.org.    86150   IN  A   152.19.134.139
 ns04.fedoraproject.org.    86150   IN  A   209.132.181.17
 ns05.fedoraproject.org.    86150   IN  A   85.236.55.10
 ns02.fedoraproject.org.    86150   IN  AAAA    2610:28:3090:3001:dead:beef:cafe:fed5
 ns05.fedoraproject.org.    86150   IN  AAAA    2001:4178:2:1269:dead:beef:cafe:fed5

 ;; Query time: 830 msec
 ;; SERVER: 192.168.1.160#53(192.168.1.160)
 ;; WHEN: Mon Jan 06 08:46:05 CST 2020
 ;; MSG SIZE  rcvd: 266

There are a few things to look at to verify that the DNS server is working correctly. Obviously getting the results back are important, but that by itself doesn’t mean the DNS server is actually doing the work.

The QUERY, ANSWER, and AUTHORITY fields at the top should show non-zero counts, as they do in our example:

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

And the SERVER field should have the IP address of your DNS server:

;; SERVER: 192.168.1.160#53(192.168.1.160)

Since this was the first time the query was run, notice that it took 830 milliseconds to complete:

;; Query time: 830 msec

If you run it again, the query will run much quicker:

$ dig fedoramagazine.org 
;; Query time: 0 msec
;; SERVER: 192.168.1.160#53(192.168.1.160)
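
The queries above test recursion for an external domain. To check that the server is also answering for the zones you defined, you can query the local records directly; given the example zone data above, the responses should look like this:

$ dig @192.168.1.160 dns01.fedora.local +short
192.168.1.160

$ dig @192.168.1.160 -x 192.168.1.160 +short
dns01.fedora.local.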

Client configuration

The client configuration will be a lot simpler.

Install the bind utilities:

$ sudo dnf install bind-utils -y

Edit the /etc/resolv.conf file and configure the Master DNS as the only name server:

$ sudo vi /etc/resolv.conf

This is how it should look:

nameserver 192.168.1.160

Save your changes and exit. Then, make the /etc/resolv.conf file immutable to prevent it from being overwritten and reverting to its default settings:

$ sudo chattr +i /etc/resolv.conf

Testing the client

You should get the same results as you did from the DNS server:

$ dig fedoramagazine.org
; <<>> DiG 9.11.13-RedHat-9.11.13-2.fc30 <<>> fedoramagazine.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8391
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: c7350d07f8efaa1286c670ab5e13482d600f82274871195a (good)
 ;; QUESTION SECTION:
 ;fedoramagazine.org.        IN  A

;; ANSWER SECTION:
 fedoramagazine.org.    50  IN  A   35.197.52.145

;; AUTHORITY SECTION:
 fedoramagazine.org.    86150   IN  NS  ns05.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns02.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns04.fedoraproject.org.

;; ADDITIONAL SECTION:
 ns02.fedoraproject.org.    86150   IN  A   152.19.134.139
 ns04.fedoraproject.org.    86150   IN  A   209.132.181.17
 ns05.fedoraproject.org.    86150   IN  A   85.236.55.10
 ns02.fedoraproject.org.    86150   IN  AAAA    2610:28:3090:3001:dead:beef:cafe:fed5
 ns05.fedoraproject.org.    86150   IN  AAAA    2001:4178:2:1269:dead:beef:cafe:fed5

 ;; Query time: 1 msec
 ;; SERVER: 192.168.1.160#53(192.168.1.160)
 ;; WHEN: Mon Jan 06 08:46:05 CST 2020
 ;; MSG SIZE  rcvd: 266

Make sure the SERVER output has the IP Address of your DNS server.

Your DNS server is now ready to use, and all requests from the client should go through it!

Sunday, 12 January

11:11

Helping mitigate the Citrix NetScaler CVE with Cloudflare Access [The Cloudflare Blog]


Yesterday, Citrix sent an updated notification to customers warning of a vulnerability in their Application Delivery Controller (ADC) product. If exploited, malicious attackers can bypass the login page of the administrator portal, without authentication, to perform arbitrary code execution.

No patch is available yet. Citrix expects to have a fix for certain versions on January 20 and others at the end of the month.

In the interim, Citrix has asked customers to attempt to mitigate the vulnerability. The recommended steps involve running a number of commands from an administrator command line interface.

The vulnerability relied on by attackers requires that they first be able to reach a login portal hosted by the ADC. Cloudflare can help teams secure that page and the resources protected by the ADC. Teams can place the login page, as well as the administration interface, behind Cloudflare Access’ identity proxy to prevent unauthenticated users from making requests to the portal.

Exploiting URL paths

Citrix ADC, also known as Citrix NetScaler, is an application delivery controller that provides Layer 3 through Layer 7 security for applications and APIs. Once deployed, administrators manage the installation of the ADC through a portal available at a dedicated URL on a hostname they control.

Users and administrators can reach the ADC interface over multiple protocols, but it appears that the vulnerability stems from HTTP paths that contain “/vpn/../vpns/” in the path via the VPN or AAA endpoints, from which a directory traversal exploit is possible.

The suggested mitigation steps ask customers to run commands which enforce new responder policies for the ADC interface. Those policies return 403s when certain paths are requested, blocking unauthenticated users from reaching directories that sit behind the authentication flow.

Protecting administrator portals with Cloudflare Access

To exploit this vulnerability, attackers must first be able to reach a login portal hosted by the ADC. As part of a defense-in-depth strategy, Cloudflare Access can prevent attackers from ever reaching the panel over HTTP or SSH.

Cloudflare Access, part of Cloudflare for Teams, protects internally managed resources by checking each request for identity and permission. When administrators secure an application behind Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.

To defend against attackers addressing IPs directly, Argo Tunnel can help secure the interface and force outbound requests through Cloudflare Access. With Argo Tunnel, and firewall rules preventing inbound traffic, no request can reach those IPs without first hitting Cloudflare, where Access can evaluate the request for authentication.

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.


Cloudflare Access can also be bundled with the Cloudflare WAF, and WAF rules can be applied to guard against this as well. Adding Cloudflare Access, the Cloudflare WAF, and the mitigation commands from Citrix together provide layers of security while a patch is in development.

How to get started

We recommend that users of the Citrix ADC follow the mitigation steps recommended by Citrix. Cloudflare Access adds another layer of security by enforcing identity-based authentication for requests made over HTTP and SSH to the ADC interface. Together, these steps can help form a defense-in-depth strategy until a patch is released by Citrix.

To get started, Citrix ADC users can place their ADC interface and exposed endpoints behind a bastion host secured by Cloudflare Access. On that bastion host, administrators can use Cloudflare Argo Tunnel to open outbound-only connections to Cloudflare through which HTTP and SSH requests can be proxied.

Once deployed, users of the login portal can connect to the protected hostname. Cloudflare Access will prompt them to login with their identity provider and Cloudflare will validate the user against the rules created to control who can reach the interface. If authenticated and allowed, the user will be able to connect. No other requests will be able to reach the interface over HTTP or SSH without authentication.

The first five seats of Cloudflare Access are free. Teams can sign up here to get started.

Wednesday, 08 January

10:08

Accelerating UDP packet transmission for QUIC [The Cloudflare Blog]


This was originally published on Perf Planet's 2019 Web Performance Calendar.

QUIC, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of UDP datagrams, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating systems updates.

But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.

For the purpose of this blog post we will only be concentrating on measuring throughput of QUIC connections, which, while necessary, is not enough to paint an accurate overall picture of the performance of the QUIC protocol (or its implementations) as a whole.

Test Environment

The client used in the measurements is h2load, built with QUIC and HTTP/3 support, while the server is NGINX, built with the open-source QUIC and HTTP/3 module provided by Cloudflare which is based on quiche (github.com/cloudflare/quiche), Cloudflare's own open-source implementation of QUIC and HTTP/3.

The client and server are run on the same host (my laptop) running Linux 5.3, so the numbers don’t necessarily reflect what one would see in a production environment over a real network, but it should still be interesting to see how much of an impact each of the techniques has.

Baseline

Currently the code that implements QUIC in NGINX uses the sendmsg() system call to send a single UDP packet at a time.

ssize_t sendmsg(int sockfd, const struct msghdr *msg,
    int flags);

The struct msghdr carries a struct iovec which can in turn carry multiple buffers. However, all of the buffers within a single iovec will be merged together into a single UDP datagram during transmission. The kernel will then take care of encapsulating the buffer in a UDP packet and sending it over the wire.


The throughput of this particular implementation tops out at around 80-90 MB/s, as measured by h2load when performing 10 sequential requests for a 100 MB resource.


sendmmsg()

Due to the fact that sendmsg() only sends a single UDP packet at a time, it needs to be invoked quite a lot in order to transmit all of the QUIC packets required to deliver the requested resources, as illustrated by the following bpftrace command:

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 904539

Each of those system calls causes an expensive context switch between the application and the kernel, thus impacting throughput.

But while sendmsg() only transmits a single UDP packet at a time for each invocation, its close cousin sendmmsg() (note the additional “m” in the name) is able to batch multiple packets per system call:

int sendmmsg(int sockfd, struct mmsghdr *msgvec,
    unsigned int vlen, int flags);

Multiple struct mmsghdr structures can be passed to the kernel as an array, each in turn carrying a single struct msghdr with its own struct iovec, with each element in the msgvec array representing a single UDP datagram.


Let's see what happens when NGINX is updated to use sendmmsg() to send QUIC packets:

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 2437
@[tracepoint:syscalls:sys_enter_sendmmsg]: 15676

The number of system calls went down dramatically, which translates into an increase in throughput, though not quite as big as the decrease in syscalls:

[Figure: throughput after switching to sendmmsg()]

UDP segmentation offload

With sendmsg() as well as sendmmsg(), the application is responsible for separating each QUIC packet into its own buffer in order for the kernel to be able to transmit it. The implementation in NGINX uses static buffers, so there is no overhead in allocating them, but all of these buffers still need to be traversed by the kernel during transmission, which can add significant overhead.

Linux supports a feature, Generic Segmentation Offload (GSO), which allows the application to pass a single "super buffer" to the kernel, which will then take care of segmenting it into smaller packets. The kernel will try to postpone the segmentation as much as possible to reduce the overhead of traversing outgoing buffers (some NICs even support hardware segmentation, but it was not tested in this experiment due to lack of capable hardware). Originally GSO was only supported for TCP, but support for UDP GSO was recently added as well, in Linux 4.18.

This feature can be controlled using the UDP_SEGMENT socket option:

setsockopt(fd, SOL_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size));

It can also be set via ancillary data, to control segmentation for each sendmsg() call:

cm = CMSG_FIRSTHDR(&msg);
cm->cmsg_level = SOL_UDP;
cm->cmsg_type = UDP_SEGMENT;
cm->cmsg_len = CMSG_LEN(sizeof(uint16_t));
*((uint16_t *) CMSG_DATA(cm)) = gso_size;

Where gso_size is the size of each segment that forms the "super buffer" passed to the kernel from the application. Once configured, the application can provide one contiguous large buffer containing a number of packets of gso_size length (as well as a final smaller packet), that will then be segmented by the kernel (or the NIC if hardware segmentation offloading is supported and enabled).


Up to 64 segments can be batched with the UDP_SEGMENT option.

GSO with plain sendmsg() already delivers a significant improvement:

[Figure: throughput with sendmsg() and GSO]

And indeed the number of syscalls also went down significantly, compared to plain sendmsg():

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 18824

GSO can also be combined with sendmmsg() to deliver an even bigger improvement. The idea being that each struct msghdr can be segmented in the kernel by setting the UDP_SEGMENT option using ancillary data, allowing an application to pass multiple “super buffers”, each carrying up to 64 segments, to the kernel in a single system call.

The improvement is again fairly significant:

[Figure: throughput with sendmmsg() and GSO]

Evolving from AFAP

Transmitting packets as fast as possible is easy to reason about, and there's much fun to be had in optimizing applications for that, but in practice this is not always the best strategy when optimizing protocols for the Internet.

Bursty traffic is more likely to cause or be affected by congestion on any given network path, which will inevitably defeat any optimization implemented to increase transmission rates.

Packet pacing is an effective technique to squeeze out more performance from a network flow. The idea being that adding a short delay between each outgoing packet will smooth out bursty traffic and reduce the chance of congestion, and packet loss. For TCP this was originally implemented in Linux via the fq packet scheduler, and later by the BBR congestion control algorithm implementation, which implements its own pacer.


Due to the nature of current QUIC implementations, which reside entirely in user-space, pacing of QUIC packets conflicts with any of the techniques explored in this post, because pacing each packet separately during transmission will prevent any batching on the application side, and in turn batching will prevent pacing, as batched packets will be transmitted as fast as possible once received by the kernel.

However Linux provides some facilities to offload the pacing to the kernel and give back some control to the application:

  • SO_MAX_PACING_RATE: an application can define this socket option to instruct the fq packet scheduler to pace outgoing packets up to the given rate. This works for UDP sockets as well, but it is yet to be seen how this can be integrated with QUIC, as a single UDP socket can be used for multiple QUIC connections (unlike TCP, where each connection has its own socket). In addition, this is not very flexible, and might not be ideal when implementing the BBR pacer.
  • SO_TXTIME / SCM_TXTIME: an application can use these options to schedule transmission of specific packets at specific times, essentially instructing fq to delay packets until the provided timestamp is reached. This gives the application a lot more control, and can be easily integrated into sendmsg() as well as sendmmsg(). But it does not yet support specifying different times for each packet when GSO is used, as there is no way to define multiple timestamps for packets that need to be segmented (each segmented packet essentially ends up being sent at the same time anyway).

While the performance gains achieved by using the techniques illustrated here are fairly significant, there are still open questions around how any of this will work with pacing, so more experimentation is required.

08:00

Prototyping optimizations with Cloudflare Workers and WebPageTest [The Cloudflare Blog]


This article was originally published as part of Perf Planet's 2019 Web Performance Calendar.

Have you ever wanted to quickly test a new performance idea, or see if the latest performance wisdom is beneficial to your site? As web performance appears to be a stochastic process, it is really important to be able to iterate quickly and review the effects of different experiments. The challenge is to be able to arbitrarily change requests and responses without the overhead of setting up another Internet-facing server. This can be straightforward to implement by combining two of my favourite technologies: WebPageTest and Cloudflare Workers. Pat Meenan sums this up with the following slide from a recent presentation on getting the most out of WebPageTest:

[Slide from the WebPageTest presentation]

So what is Cloudflare Workers and why is it ideally suited to easy prototyping of optimizations?

Cloudflare Workers

From the documentation:

Cloudflare Workers provides a lightweight JavaScript execution environment that allows developers to augment existing applications or create entirely new ones without configuring or maintaining infrastructure. A Cloudflare Worker is a programmable proxy which brings the simplicity and flexibility of the Service Workers event-based fetch API from the browser to the edge. This allows a worker to intercept and modify requests and responses.


With the Service Worker API you can add an EventListener to any fetch event that is routed through the worker script and modify the request to come from a different origin.

Cloudflare Workers also provides a streaming HTMLRewriter to enable on the fly modification of HTML as it passes through the worker. The streaming nature of this parser ensures latency is minimised as the entire HTML document does not have to be buffered before rewriting can happen.

Setting up a worker

It is really quick and easy to sign up for a free subdomain at workers.dev which provides you with 100,000 free requests per day. There is a quick-start guide available here. To be able to run the examples in this post you will need to install Wrangler, the CLI tool for deploying workers. Once Wrangler is installed, run the following command to download the example worker project:

wrangler generate wpt-proxy https://github.com/xtuc/WebPageTest-proxy

You will then need to update the wrangler.toml with your account_id, which can be found in the dashboard in the right sidebar. Then configure an API key with the command:

wrangler config

Finally, you can publish the worker with:  

wrangler publish

At this point, the worker will be active at

https://wpt-proxy.<your-subdomain>.workers.dev.

WebPageTest OverrideHost  

Now that your worker is configured, the next step is to configure WebPageTest to redirect requests through the worker. WebPageTest has a feature where it can re-point arbitrary origins to a different domain. To access the feature in WebPageTest, you need to use the WebPageTest scripting language "overrideHost" command, as shown:

[Screenshot: a WebPageTest script using the overrideHost command]

This example will redirect all network requests to www.bbc.co.uk to wpt-proxy.prf.workers.dev instead. WebPageTest also adds an x-host header to each redirected request so that the destination can determine for which host the request was originally intended:    

x-host: www.bbc.co.uk

The script can process multiple overrideHost commands to override multiple different origins. If HTTPS is used, WebPageTest can use HTTP/2 and benefit from connection coalescing:  

overrideHost www.bbc.co.uk wpt-proxy.prf.workers.dev    
overrideHost nav.files.bbci.co.uk wpt-proxy.prf.workers.dev
navigate https://www.bbc.co.uk

 It also supports wildcards:  

overrideHost *bbc.co.uk wpt-proxy.prf.workers.dev    
navigate https://www.bbc.co.uk

There are a few special strings that can be used in a script when bulk testing, so a single script can be re-used across multiple URLs:

  • %URL% - Replaces with the URL of the current test
  • %HOST% - Replaces with the hostname of the URL of the current test
  • %HOSTR% - Replaces with the hostname of the final URL in case the test URL does a redirect.

A more generic script would look like this:    

overrideHost %HOSTR% wpt-proxy.prf.workers.dev    
navigate %URL% 

Basic worker

In the base example below, the worker listens for the fetch event, looks for the x-host header that WebPageTest has set, and responds by fetching the content from the original URL:

/* 
* Handle all requests. 
* Proxy requests with an x-host header and return 403
* for everything else
*/

addEventListener("fetch", event => {    
   const host = event.request.headers.get('x-host');        
   if (host) {          
      const url = new URL(event.request.url);          
      const originUrl = url.protocol + '//' + host + url.pathname + url.search;             
      let init = {             
         method: event.request.method,             
         redirect: "manual",             
         headers: [...event.request.headers]          
      };          
      event.respondWith(fetch(originUrl, init));        
   } 
   else {           
     const response = new Response('x-Host headers missing', {status: 403});                
     event.respondWith(response);        
   }    
});

The source code can be found here and instructions to download and deploy this worker are described in the earlier section.
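
Once published, you can sanity-check the worker outside of WebPageTest by sending it a request with the x-host header yourself (the hostname here is only an example; a request without the header should return the 403 from the code above):

curl -H "x-host: www.bbc.co.uk" https://wpt-proxy.<your-subdomain>.workers.dev/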

So what happens if we point all the domains on the BBC website through this worker, using the following config:  

overrideHost    *bbci.co.uk wpt.prf.workers.dev    
overrideHost    *bbc.co.uk  wpt.prf.workers.dev    
navigate    https://www.bbc.co.uk

configured to a 3G Fast setting from a UK test location.

[Before/after filmstrip comparison of the BBC website when loaded over a single connection]

The potential performance improvement of loading a page over a single connection, eliminating the additional DNS lookups, TCP connections and TLS handshakes, can be seen by comparing the filmstrips and waterfalls. There are several reasons why you may not want or be able to move everything to a single domain, but at least it is now easy to see what the performance difference would be.

HTMLRewriter

With the HTMLRewriter, it is possible to change the HTML response as it passes through the worker. A jQuery-like syntax provides CSS-selector matching and a standard set of DOM mutation methods. For instance you could rewrite your page to measure the effects of different preload/prefetch strategies, review the performance savings of removing or using different third-party scripts, or you could stock-take the HEAD of your document. One piece of performance advice is to self-host some third-party scripts. This example script invokes the HTMLRewriter to listen for a script tag with a src attribute. If the script is from a proxiable domain the src is rewritten to be first-party, with a specific path prefix.

async function rewritePage(request) {  
  const response = await fetch(request);    
    return new HTMLRewriter()      
      .on("script[src]", {        
        element: el => {          
          let src = el.getAttribute("src");          
          if (PROXIED_URL_PREFIXES_RE.test(src)) {
            el.setAttribute("src", createProxiedScriptUrl(src));
          }           
        }    
    })    
    .transform(response);
}

Subsequently, when the browser makes a request with the specific prefix, the worker fetches the asset from the original URL. This example can be downloaded with this command:    

wrangler generate test https://github.com/xtuc/rewrite-3d-party.git

Request Mangling

As well as rewriting content, it is also possible to change or delay a request. Below is an example of how to randomly add a delay of a second to a request:

addEventListener("fetch", event => {    
    const host = event.request.headers.get('x-host');    
    if (host) { 
//....     
    // Add the delay if necessary     
    if (Math.random() * 100 < DELAY_PERCENT) {       
      await new Promise(resolve => setTimeout(resolve, DELAY_MS));     
    }    
    event.respondWith(fetch(originUrl, init));
//...
}

HTTP/2 prioritization

What if you want to see what the effect of changing the HTTP/2 prioritization of assets would make to your website? Cloudflare Workers provide custom http2 prioritization schemes that can be applied by setting a custom header on the response. The cf-priority header is defined as <priority>/<concurrency> so adding:    

response.headers.set('cf-priority', '30/0');    

would set the priority of the given response to 30 with a concurrency of 0. Similarly, '30/1' would set concurrency to 1 and '30/n' would set concurrency to n. With this flexibility, you can prioritize the bytes that are important for your website, or run a bulk test to prove that your new prioritization scheme is better than any of the existing browser implementations.

Summary

A major barrier to understanding and innovation is the amount of time it takes to get feedback. Having a quick and easy framework to try out a new idea and comprehend its impact is key. I hope this post has convinced you that combining WebPageTest and Cloudflare Workers is an easy solution to this problem, and is indeed magic.

02:00

How to setup multiple monitors in sway [Fedora Magazine]

Sway is a tiling Wayland compositor which has mostly the same features, look and workflow as the i3 X11 window manager. Because Sway uses Wayland instead of X11, the tools used to set up X11 don’t always work in sway. This includes tools like xrandr, which are used in X11 window managers or desktops to set up monitors. This is why monitors have to be set up by editing the sway config file, and that’s what this article is about.

Getting your monitor IDs

First, you have to get the names sway uses to refer to your monitors. You can do this by running:

$ swaymsg -t get_outputs

You will get information about all of your monitors, every monitor separated by an empty line.

You have to look for the first line of every section, and for what’s after “Output”. For example, when you see a line like “Output DVI-D-1 ‘Philips Consumer Electronics Company’”, the output ID is “DVI-D-1”. Note these IDs and which physical monitors they belong to.

Editing the config file

If you haven’t edited the Sway config file before, you have to copy it to your home directory by running this command:

mkdir -p ~/.config/sway
cp /etc/sway/config ~/.config/sway/config

Now the default config file is located in ~/.config/sway and called “config”. You can edit it using any text editor.

Now you have to do a little bit of math. Imagine a grid with the origin in the top left corner. The units of the X and Y coordinates are pixels. The Y axis is inverted. This means that if you, for example, start at the origin and you move 100 pixels to the right and 80 pixels down, your coordinates will be (100, 80).

You have to calculate where your displays are going to end up on this grid. The locations of the displays are specified with the top left pixel. For example, if we want to have a monitor with name HDMI1 and a resolution of 1920×1080, and to the right of it a laptop monitor with name eDP1 and a resolution of 1600×900, you have to type this in your config file:

output HDMI1 pos 0 0
output eDP1 pos 1920 0

You can also specify the resolutions manually by using the res option: 

output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 1920 0 res 1600x900
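
The same coordinate math works for any arrangement. For example, to place the laptop display below the external monitor instead of beside it (reusing the example names and resolutions above), its top-left corner starts 1080 pixels down:

output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 0 1080 res 1600x900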

Binding workspaces to monitors

Using sway with multiple monitors can be a little bit tricky with workspace management. Luckily, you can bind workspaces to a specific monitor, so you can easily switch to that monitor and use your displays more efficiently. This can simply be done by the workspace command in your config file. For example, if you want to bind workspace 1 and 2 to monitor DVI-D-1 and workspace 8 and 9 to monitor HDMI-A-1, you can do that by using:

workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1

That’s it! These are the basics of multi-monitor setup in sway. A more detailed guide can be found at https://github.com/swaywm/sway/wiki#Multihead

Tuesday, 07 January

07:00

Introducing Cloudflare for Teams [The Cloudflare Blog]


Ten years ago, when Cloudflare was created, the Internet was a place that people visited. People still talked about ‘surfing the web’ and the iPhone was less than two years old, but on July 4, 2009 large scale DDoS attacks were launched against websites in the US and South Korea.

Those attacks highlighted how fragile the Internet was and how all of us were becoming dependent on access to the web as part of our daily lives.

Fast forward ten years and the speed, reliability and safety of the Internet is paramount as our private and work lives depend on it.

We started Cloudflare to solve one half of every IT organization's challenge: how do you ensure the resources and infrastructure that you expose to the Internet are safe from attack, fast, and reliable. We saw that the world was moving away from hardware and software to solve these problems and instead wanted a scalable service that would work around the world.

To deliver that, we built one of the world's largest networks. Today our network spans more than 200 cities worldwide and is within milliseconds of nearly everyone connected to the Internet. We have built the capacity to stand up to nation-state scale cyberattacks and a threat intelligence system powered by the immense amount of Internet traffic that we see.


Today we're expanding Cloudflare's product offerings to solve the other half of every IT organization's challenge: ensuring the people and teams within an organization can access the tools they need to do their job and are safe from malware and other online threats.

The speed, reliability, and protection we’ve brought to public infrastructure is extended today to everything your team does on the Internet.

In addition to protecting an organization's infrastructure, IT organizations are charged with ensuring that employees of an organization can access the tools they need safely. Traditionally, these problems would be solved by hardware products like VPNs and Firewalls. VPNs let authorized users access the tools they needed and Firewalls kept malware out.

Castle and Moat


The dominant model was the idea of a castle and a moat. You put all your valuable assets inside the castle. Your Firewall created the moat around the castle to keep anything malicious out. When you needed to let someone in, a VPN acted as the drawbridge over the moat.

This is still the model most businesses use today, but it's showing its age. The first challenge is that if an attacker is able to find its way over the moat and into the castle, then it can cause significant damage. Unfortunately, few weeks go by without a news story about how an organization had significant data compromised because an employee fell for a phishing email, or a contractor was compromised, or someone was able to sneak into an office and plug in a rogue device.

The second challenge of the model is the rise of cloud and SaaS. Increasingly, an organization's resources aren't just in one castle anymore, but instead spread across different public cloud and SaaS vendors.

Services like Box, for instance, provide better storage and collaboration tools than most organizations could ever hope to build and manage themselves. But there's literally nowhere you can ship a hardware box to Box in order to build your own moat around their SaaS castle. Box provides some great security tools themselves, but they are different from the tools provided by every other SaaS and public cloud vendor. Where IT organizations used to try to have a single pane of glass with a complex mess of hardware to see who was getting stopped by their moats and who was crossing their drawbridges, SaaS and cloud make that visibility increasingly difficult.

The third challenge to the traditional castle and moat strategy of IT is the rise of mobile. Where once upon a time your employees would all show up to work in your castle, now people are working around the world. Requiring everyone to login to a limited number of central VPNs becomes obviously absurd when you picture it as villagers having to sprint back from wherever they are across a drawbridge whenever they want to get work done. It's no wonder VPN support is one of the top IT organization tickets and likely always will be for organizations that maintain a castle and moat approach.

But it's worse than that. Mobile has also introduced a culture where employees bring their own devices to work. And even on company-managed devices, people now work from the road or from home, beyond the protected walls of the castle and without the security provided by a moat.

If you'd looked at how we managed our own IT systems at Cloudflare four years ago, you'd have seen us following this same model. We used firewalls to keep threats out and required every employee to log in through our VPN to get their work done. As someone who travels extensively for my job, I found it especially painful.

Regularly, someone would send me a link to an internal wiki article asking for my input. I'd almost certainly be working from my mobile phone in the back of a cab running between meetings. I'd try and access the link and be prompted to login to our VPN in San Francisco. That's when the frustration would start.

Corporate mobile VPN clients, in my experience, all seem to be powered by some 100-sided die that will only allow you to connect if the number of miles you are from your home office is less than 25 times whatever number is rolled. After much frustration, and several IT tickets, with a little luck I might be able to connect. And even then, the experience was horribly slow and unreliable.

When we audited our own system, we found that the frustration with the process had caused multiple teams to create workarounds that were, effectively, unauthorized drawbridges over our carefully constructed moat. And, as we increasingly adopted SaaS tools like Salesforce and Workday, we lost much of our visibility into how these tools were being used.

Around the same time we were realizing the traditional approach to IT security was untenable for an organization like Cloudflare, Google published their paper titled "BeyondCorp: A New Approach to Enterprise Security." The core idea was that a company's intranet should be no more trusted than the Internet. And, rather than the perimeter being enforced by a singular moat, instead each application and data source should authenticate the individual and device each time it is accessed.

The BeyondCorp idea, which has come to be known as the Zero Trust model for IT security, was influential in how we thought about our own systems. Because Cloudflare had a flexible global network, we were able to use it both to enforce policies as our team accessed tools and to protect ourselves from malware as we did our jobs.

Cloudflare for Teams

Today, we're excited to announce Cloudflare for Teams™: the suite of tools we built to protect ourselves, now available to help any IT organization, from the smallest to the largest.

Cloudflare for Teams is built around two complementary products: Access and Gateway. Cloudflare Access™ is the modern VPN — a way to ensure your team members get fast access to the resources they need to do their job while keeping threats out. Cloudflare Gateway™ is the modern Next Generation Firewall — a way to ensure that your team members are protected from malware and follow your organization's policies wherever they go online.

Powerfully, both Cloudflare Access and Cloudflare Gateway are built atop the existing Cloudflare network. That means they are fast, reliable, scalable to the largest organizations, DDoS resistant, and located everywhere your team members are today and wherever they may travel. Have a senior executive going on a photo safari to see giraffes in Kenya, gorillas in Rwanda, and lemurs in Madagascar? Don't worry: we have Cloudflare data centers in all those countries (and many more), and they all support Cloudflare for Teams.

All Cloudflare for Teams products are informed by the threat intelligence we see across all of Cloudflare's products. We see such a large diversity of Internet traffic that we often see new threats and malware before anyone else. We've supplemented our own proprietary data with additional data sources from leading security vendors, ensuring Cloudflare for Teams provides a broad set of protections against malware and other online threats.

Moreover, because Cloudflare for Teams runs atop the same network we built for our infrastructure protection products, we can deliver them very efficiently. That means we can offer these products to our customers at extremely competitive prices. Our goal is to make the return on investment (ROI) for all Cloudflare for Teams customers nothing short of a no-brainer. If you’re considering another solution, contact us before you decide.

Both Cloudflare Access and Cloudflare Gateway also build off products we've launched and battle tested already. For example, Gateway builds, in part, off our 1.1.1.1 Public DNS resolver. Today, more than 40 million people trust 1.1.1.1 as the fastest public DNS resolver globally. By adding malware scanning, we were able to create our entry-level Cloudflare Gateway product.

Cloudflare Access and Cloudflare Gateway build off our WARP and WARP+ products. We intentionally built a consumer mobile VPN service because we knew it would be hard. The millions of WARP and WARP+ users who have put the product through its paces have ensured that it's ready for the enterprise. That we have 4.5 stars across more than 200,000 ratings, just on iOS, is a testament to how reliable the underlying WARP and WARP+ engines have become. Compare that with the ratings of any corporate mobile VPN client, which are unsurprisingly abysmal.

We’ve partnered with some incredible organizations to create the ecosystem around Cloudflare for Teams. These include endpoint security solutions such as VMware Carbon Black, Malwarebytes, and Tanium; SIEM and analytics solutions such as Datadog, Sumo Logic, and Splunk; and identity platforms such as Okta, OneLogin, and Ping Identity. Feedback from these partners and more is at the end of this post.

If you’re curious about more of the technical details about Cloudflare for Teams, I encourage you to read Sam Rhea’s post.

Serving Everyone

Cloudflare has always believed in the power of serving everyone. That’s why we’ve offered a free version of Cloudflare for Infrastructure since we launched in 2010. That belief doesn’t change with our launch of Cloudflare for Teams. For both Cloudflare Access and Cloudflare Gateway, there will be free versions to protect individuals, home networks, and small businesses. We remember what it was like to be a startup and believe that everyone deserves to be safe online, regardless of their budget.

Both Cloudflare Access and Gateway are segmented along a Good, Better, Best framework. For Access, that breaks out into Access Basic, Access Pro, and Access Enterprise. You can see the features available with each tier in the table below, including Access Enterprise features that will roll out over the coming months.

(Table: feature comparison across Access Basic, Access Pro, and Access Enterprise)

We wanted a similar Good, Better, Best framework for Cloudflare Gateway. Gateway Basic can be provisioned in minutes through a simple change to your network’s recursive DNS settings. Once in place, network administrators can set rules on which domains should be allowed and which should be filtered on the network. Cloudflare Gateway is informed both by the malware data gathered from our global sensor network and by a rich corpus of domain categorization, allowing network operators to set whatever policy makes sense for them. Gateway Basic leverages the speed of 1.1.1.1 with granular network controls.

Gateway Pro, which we’re announcing today and which you can sign up to beta test as its features roll out over the coming months, extends that DNS-provisioned protection to a full proxy. Gateway Pro can be provisioned via the WARP client — which we are extending beyond iOS and Android mobile devices to also support Windows, MacOS, and Linux — or via network policies, including MDM-provisioned proxy settings or GRE tunnels from office routers. This allows a network operator to set filtering policies not merely by domain but by the specific URL.
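
To make the difference in granularity concrete, the sketch below contrasts a DNS-level rule (a resolver-based deployment like Gateway Basic can only act on the hostname being looked up) with a full-proxy rule that can match the complete URL. The rule shapes, domains, and defaults are invented for illustration; this is not Cloudflare's API.

```typescript
// Illustrative only: hypothetical rule shapes, not Cloudflare's actual policy model.

type DnsRule = { domain: string; action: "allow" | "block" };   // DNS layer: hostname granularity
type UrlRule = { pattern: RegExp; action: "allow" | "block" };  // full proxy: URL granularity

// DNS-level filtering sees only the hostname being resolved.
function evaluateDnsQuery(hostname: string, rules: DnsRule[]): "allow" | "block" {
  const match = rules.find(r => hostname === r.domain || hostname.endsWith(`.${r.domain}`));
  return match?.action ?? "allow"; // default-allow anything without a rule (an assumption)
}

// A full proxy sees the entire request, so policy can key off the path and query string too.
function evaluateProxiedRequest(url: URL, rules: UrlRule[]): "allow" | "block" {
  const match = rules.find(r => r.pattern.test(url.href));
  return match?.action ?? "allow";
}

// Example: a known-bad hostname can be blocked at the DNS layer...
console.log(evaluateDnsQuery("malware.example", [{ domain: "malware.example", action: "block" }]));
// ...but only a proxy can block one risky path on an otherwise healthy host.
console.log(evaluateProxiedRequest(
  new URL("https://files.example.com/uploads/invoice.exe"),
  [{ pattern: /\.exe$/, action: "block" }],
));
```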

Building the Best-in-Class Network Gateway

While Gateway Basic (provisioned via DNS) and Gateway Pro (provisioned as a proxy) made sense, we wanted to imagine what the best-in-class network gateway would be for Enterprises that valued the highest level of performance and security. As we talked to these organizations we heard an ever-present concern: just surfing the Internet created risk of unauthorized code compromising devices. With every page that every user visited, third party code (JavaScript, etc.) was being downloaded and executed on their devices.

The solution, they suggested, was to isolate the local browser from third party code and have websites render in the network. This technology is known as browser isolation. And, in theory, it’s a great idea. Unfortunately, in practice with current technology, it doesn’t perform well. The most common way browser isolation technology works is to render the page on a server and then push a bitmap of the page down to the browser. This is known as pixel pushing. The challenge is that it can be slow, bandwidth intensive, and it breaks many sophisticated web applications.

We were hopeful that we could solve some of these problems by moving the rendering of the pages to Cloudflare’s network, which would be closer to end users. So we talked with many of the leading browser isolation companies about potentially partnering. Unfortunately, as we experimented with their technologies, even with our vast network, we couldn’t overcome the sluggish feel that plagues existing browser isolation solutions.

Enter S2 Systems

That’s when we were introduced to S2 Systems. I clearly remember first trying the S2 demo because my first reaction was: “This can’t be working correctly, it’s too fast.” The S2 team had taken a different approach to browser isolation. Rather than trying to push down a bitmap of what the screen looked like, instead they pushed down the vectors to draw what’s on the screen. The result was an experience that was typically at least as fast as browsing locally and without broken pages.

The best, albeit imperfect, analogy I’ve come up with to describe the difference between S2’s technology and other browser isolation companies is the difference between Windows XP and Mac OS X when they were both launched in 2001. Windows XP’s original graphics were based on bitmapped images. Mac OS X’s were based on vectors. Remember the magic of watching an application “genie” in and out of the Mac OS X dock? Check it out in a video from the launch…

At the time watching a window slide in and out of the dock seemed like magic compared with what you could do with bitmapped user interfaces. You can hear the awe in the reaction from the audience. That awe that we’ve all gotten used to in UIs today comes from the power of vector images. And, if you’ve been underwhelmed by the pixel-pushed bitmaps of existing browser isolation technologies, just wait until you see what is possible with S2’s technology.

We were so impressed with the team and the technology that we acquired the company. We will be integrating the S2 technology into Cloudflare Gateway Enterprise. The browser isolation technology will run across Cloudflare’s entire global network, bringing it within milliseconds of virtually every Internet user. You can learn more about this approach in Darren Remington's blog post.

Once the rollout is complete in the second half of 2020 we expect we will be able to offer the first full browser isolation technology that doesn’t force you to sacrifice performance. In the meantime, if you’d like a demo of the S2 technology in action, let us know.

The Promise of a Faster Internet for Everyone

Cloudflare’s mission is to help build a better Internet. With Cloudflare for Teams, we’ve extended that network to protect the people and organizations that use the Internet to do their jobs. We’re excited to help a more modern, mobile, and cloud-enabled Internet be safer and faster than it ever was with traditional hardware appliances.

But the same technology we’re deploying now to improve enterprise security holds further promise. The most interesting Internet applications keep getting more complicated and, in turn, requiring more bandwidth and processing power to use.

For those of us fortunate enough to be able to afford the latest iPhone, we continue to reap the benefits of an increasingly powerful set of Internet-enabled tools. But try to use the Internet on a mobile phone from a few generations back, and you can see how quickly the latest Internet applications leave legacy devices behind. That’s a problem if we want to bring the next 4 billion Internet users online.

We need a paradigm shift if the sophistication of applications and the complexity of interfaces continue to keep pace with the latest generation of devices. To make the best of the Internet available to everyone, we may need to shift the work of the Internet off the end devices we all carry around in our pockets and let the network — where power, bandwidth, and CPU are relatively plentiful — carry more of the load.

That’s the long term promise of what S2’s technology combined with Cloudflare’s network may someday power. If we can make it so a less expensive device can run the latest Internet applications — using less battery, bandwidth, and CPU than ever before possible — then we can make the Internet more affordable and accessible for everyone.

We started with Cloudflare for Infrastructure. Today we’re announcing Cloudflare for Teams. But our ambition is nothing short of Cloudflare for Everyone.

Early Feedback on Cloudflare for Teams from Customers and Partners

"Cloudflare Access has enabled Ziff Media Group to seamlessly and securely deliver our suite of internal tools to employees around the world on any device, without the need for complicated network configurations,” said Josh Butts, SVP Product & Technology, Ziff Media Group.


“VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them,” said Amod Malviya, Cofounder and CTO, Udaan. “Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0”


“Roman makes healthcare accessible and convenient,” said Ricky Lindenhovius, Engineering Director, Roman Health. “Part of that mission includes connecting patients to physicians, and Cloudflare helps Roman securely and conveniently connect doctors to internally managed tools. With Cloudflare, Roman can evaluate every request made to internal applications for permission and identity, while also improving speed and user experience.”


“We’re excited to partner with Cloudflare to provide our customers an innovative approach to enterprise security that combines the benefits of endpoint protection and network security," said Tom Barsi, VP Business Development, VMware. "VMware Carbon Black is a leading endpoint protection platform (EPP) and offers visibility and control of laptops, servers, virtual machines, and cloud infrastructure at scale. In partnering with Cloudflare, customers will have the ability to use VMware Carbon Black’s device health as a signal in enforcing granular authentication to a team’s internally managed application via Access, Cloudflare’s Zero Trust solution. Our joint solution combines the benefits of endpoint protection and a zero trust authentication solution to keep teams working on the Internet more secure."


“Rackspace is a leading global technology services company accelerating the value of the cloud during every phase of our customers’ digital transformation,” said Lisa McLin, vice president of alliances and channel chief at Rackspace. “Our partnership with Cloudflare enables us to deliver cutting edge networking performance to our customers and helps them leverage a software defined networking architecture in their journey to the cloud.”


“Employees are increasingly working outside of the traditional corporate headquarters. Distributed and remote users need to connect to the Internet, but today’s security solutions often require they backhaul those connections through headquarters to have the same level of security,” said Michael Kenney, head of strategy and business development for Ingram Micro Cloud. “We’re excited to work with Cloudflare whose global network helps teams of any size reach internally managed applications and securely use the Internet, protecting the data, devices, and team members that power a business.”


"At Okta, we’re on a mission to enable any organization to securely use any technology. As a leading provider of identity for the enterprise, Okta helps organizations remove the friction of managing their corporate identity for every connection and request that their users make to applications. We’re excited about our partnership with Cloudflare and bringing seamless authentication and connection to teams of any size,” said Chuck Fontana, VP, Corporate & Business Development, Okta.


"Organizations need one unified place to see, secure, and manage their endpoints,” said Matt Hastings, Senior Director of Product Management at Tanium. “We are excited to partner with Cloudflare to help teams secure their data, off-network devices, and applications. Tanium’s platform provides customers with a risk-based approach to operations and security with instant visibility and control into their endpoints. Cloudflare helps extend that protection by incorporating device data to enforce security for every connection made to protected resources.”


“OneLogin is happy to partner with Cloudflare to advance security teams' identity control in any environment, whether on-premise or in the cloud, without compromising user performance," said Gary Gwin, Senior Director of Product at OneLogin. "OneLogin’s identity and access management platform securely connects people and technology for every user, every app, and every device. The OneLogin and Cloudflare for Teams integration provides a comprehensive identity and network control solution for teams of all sizes.”


“Ping Identity helps enterprises improve security and user experience across their digital businesses,” said Loren Russon, Vice President of Product Management, Ping Identity. “Cloudflare for Teams integrates with Ping Identity to provide a comprehensive identity and network control solution to teams of any size, and ensures that only the right people get the right access to applications, seamlessly and securely."


"Our customers increasingly leverage deep observability data to address both operational and security use cases, which is why we launched Datadog Security Monitoring," said Marc Tremsal, Director of Product Management at Datadog. "Our integration with Cloudflare already provides our customers with visibility into their web and DNS traffic; we're excited to work together as Cloudflare for Teams expands this visibility to corporate environments."


“As more companies support employees who work on corporate applications from outside of the office, it is vital that they understand each request users are making. They need real-time insights and intelligence to react to incidents and audit secure connections," said John Coyle, VP of Business Development, Sumo Logic. "With our partnership with Cloudflare, customers can now log every request made to internal applications and automatically push them directly to Sumo Logic for retention and analysis."


“Cloudgenix is excited to partner with Cloudflare to provide an end-to-end security solution from the branch to the cloud. As enterprises move off of expensive legacy MPLS networks and adopt branch to internet breakout policies, the CloudGenix CloudBlade platform and Cloudflare for Teams together can make this transition seamless and secure. We’re looking forward to Cloudflare’s roadmap with this announcement and partnership opportunities in the near term.” said Aaron Edwards, Field CTO, Cloudgenix.


“In the face of limited cybersecurity resources, organizations are looking for highly automated solutions that work together to reduce the likelihood and impact of today’s cyber risks,” said Akshay Bhargava, Chief Product Officer, Malwarebytes. “With Malwarebytes and Cloudflare together, organizations are deploying more than twenty layers of security defense-in-depth. Using just two solutions, teams can secure their entire enterprise from device, to the network, to their internal and external applications.”


"Organizations' sensitive data is vulnerable in-transit over the Internet and when it's stored at its destination in public cloud, SaaS applications and endpoints,” said Pravin Kothari, CEO of CipherCloud. “CipherCloud is excited to partner with Cloudflare to secure data in all stages, wherever it goes. Cloudflare’s global network secures data in-transit without slowing down performance. CipherCloud CASB+ provides a powerful cloud security platform with end-to-end data protection and adaptive controls for cloud environments, SaaS applications and BYOD endpoints. Working together, teams can rely on integrated Cloudflare and CipherCloud solution to keep data always protected without compromising user experience.”


07:00

Security on the Internet with Cloudflare for Teams [The Cloudflare Blog]

Your experience using the Internet has continued to improve over time. It’s gotten faster, safer, and more reliable. However, you probably have to use a different, worse equivalent of it when you do your work. While the Internet kept getting better, businesses and their employees were stuck using their own private networks.

In those networks, teams hosted their own applications, stored their own data, and protected all of it by building a castle and moat around that private world. This model hid internally managed resources behind VPN appliances and on-premise firewall hardware. The experience was awful, for users and administrators alike. While the rest of the Internet became more performant and more reliable, business users were stuck in an alternate universe.

That legacy approach was less secure and slower than teams wanted, but the corporate perimeter mostly worked for a time. However, that began to fall apart with the rise of cloud-delivered applications. Businesses migrated to SaaS versions of software that previously lived in that castle and behind that moat. Users needed to connect to the public Internet to do their jobs, and attackers made the Internet unsafe in sophisticated, unpredictable ways, which opened every business up to a new world of never-ending risks.

How did enterprise security respond? By trying to solve a new problem with a legacy solution, and forcing the Internet into equipment that was only designed for private, corporate networks. Instead of benefitting from the speed and availability of SaaS applications, users had to backhaul Internet-bound traffic through the same legacy boxes that made their private network miserable.

Teams then watched as their bandwidth bills increased. More traffic to the Internet from branch offices forced more traffic over expensive, dedicated links. Administrators now had to manage a private network and the connections to the entire Internet for their users, all with the same hardware. More traffic required more hardware and the cycle became unsustainable.

Cloudflare’s first wave of products secured and improved the speed of Internet-facing sites by letting customers, from free users to some of the largest properties on the Internet, replace that hardware stack with Cloudflare’s network. We could deliver capacity at a scale that would be impossible for nearly any company to build themselves. We deployed data centers in over 200 cities around the world that help us reach users wherever they are.

We built a unique network to let sites scale how they secured infrastructure on the Internet with their own growth. But internally, businesses and their employees were stuck using their own private networks.

Just as we helped organizations secure their infrastructure by replacing boxes, we can do the same for their teams and their data. Today, we’re announcing a new platform that applies our network, and everything we’ve learned, to make the Internet faster and safer for teams.

Cloudflare for Teams protects enterprises, devices, and data by securing every connection without compromising user performance. The speed, reliability, and protection we brought to securing infrastructure are extended to everything your team does on the Internet.

The legacy world of corporate security

Organizations all share three problems they need to solve at the network level:

  1. Secure team member access to internally managed applications
  2. Secure team members from threats on the Internet
  3. Secure the corporate data that lives in both environments

Each of these challenges poses a real risk to any team. If any component is compromised, the entire business becomes vulnerable.

Internally managed applications

Solving the first bucket, internally managed applications, started by building a perimeter around those internal resources. Administrators deployed applications on a private network and users outside of the office connected to them with client VPN agents through VPN appliances that lived back on-site.

Users hated it, and they still do, because it made it harder to get their jobs done. A sales team member traveling to a customer visit in the back of a taxi had to start a VPN client on their phone just to review details about the meeting. An engineer working remotely had to sit and wait as every connection they made to developer tools was backhauled through a central VPN appliance.

Administrators and security teams also had issues with this model. Once a user connects to the private network, they’re typically able to reach multiple resources without having to prove they’re authorized to do so. Just because I’m able to enter the front door of an apartment building doesn’t mean I should be able to walk into any individual apartment. However, on private networks, enforcing additional security within the bounds of the private network required complicated microsegmentation, if it was done at all.

Threats on the Internet

The second challenge, securing users connecting to SaaS tools on the public Internet and applications in the public cloud, required security teams to protect against known threats and potential zero-day attacks as their users left the castle and moat.

How did most companies respond? By forcing all traffic leaving branch offices or remote users back through headquarters and using the same hardware that secured their private network to try to build a perimeter around the Internet, or at least the Internet their users accessed. All of the Internet-bound traffic leaving a branch office in Asia, for example, would be sent back through a central location in Europe, even if the destination was just down the street.

Organizations needed those connections to be stable, and to prioritize certain functions like voice and video, so they paid carriers to support dedicated multi-protocol label switching (MPLS) links. MPLS improved performance by applying label switching to traffic, which downstream routers could forward without needing to perform an IP lookup, but it was eye-wateringly expensive.
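
As a toy illustration of why label switching is cheap for the routers along the path, the sketch below forwards a packet with a flat lookup on a small integer label and a label swap, instead of a longest-prefix match on a destination IP address. The labels, interfaces, and table entries are made up.

```typescript
// Toy illustration of MPLS-style label switching: forwarding is an exact-match lookup
// on a small integer label rather than a longest-prefix match on an IP address.
// Labels, interfaces, and entries below are invented for the example.
const labelForwardingTable = new Map<number, { outInterface: string; outLabel: number }>([
  [1001, { outInterface: "eth1", outLabel: 2004 }],
  [1002, { outInterface: "eth2", outLabel: 2010 }],
]);

function forwardMplsPacket(label: number) {
  const entry = labelForwardingTable.get(label);                  // one exact-match lookup
  if (!entry) throw new Error("unknown label");
  return { sendOn: entry.outInterface, swapTo: entry.outLabel };  // swap the label and forward
}

console.log(forwardMplsPacket(1001)); // { sendOn: "eth1", swapTo: 2004 }
```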

Securing data

The third challenge, keeping data safe, became a moving target. Organizations had to keep data secure in a consistent way as it lived and moved between private tools on corporate networks and SaaS applications like Salesforce or Office 365.

The answer? More of the same. Teams backhauled traffic over MPLS links to a place where data could be inspected, adding more latency and introducing more hardware that had to be maintained.

What changed?

The balance of internal versus external traffic began to shift as SaaS applications became the new default for small businesses and Fortune 500s alike. Users now do most of their work on the Internet, with tools like Office 365 continuing to gain adoption. As those tools become more popular, more data leaves the moat and lives on the public Internet.

User behavior also changed. Users left the office and worked from multiple devices, both managed and unmanaged. Teams became more distributed and the perimeter was stretched to its limit.

This caused legacy approaches to fail

Legacy approaches to corporate security pushed the castle and moat model further out. However, that model simply cannot scale with how users do work on the Internet today.

Internally managed applications

Private networks give users headaches, but they’re also a constant and complex chore to maintain. VPNs require expensive equipment that must be upgraded or expanded and, as more users leave the office, that equipment must try to scale up.

The result is a backlog of IT help desk tickets as users struggle with their VPN and, on the other side of the house, administrators and security teams try to put band-aids on the approach.

Threats on the Internet

Organizations initially saved money by moving to SaaS tools, but wound up spending more money over time as their traffic increased and bandwidth bills climbed.

Additionally, threats evolve. The traffic sent back to headquarters was secured with static models of scanning and filtering using hardware gateways. Users were still vulnerable to new types of threats that these on-premise boxes did not block yet.

Securing data

The cost of keeping data secure in both environments also grew. Security teams attempted to inspect Internet-bound traffic for threats and data loss by backhauling branch office traffic through on-premise hardware, degrading speed and increasing bandwidth fees.

Even more dangerous, data now lived permanently outside of that castle and moat model. Organizations were now vulnerable to attacks that bypassed their perimeter and targeted SaaS applications directly.

How will Cloudflare solve these problems?

Cloudflare for Teams consists of two products, Cloudflare Access and Cloudflare Gateway.

We launched Access last year and are excited to bring it into Cloudflare for Teams. We built Cloudflare Access to solve the first challenge that corporate security teams face: protecting internally managed applications.

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.
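
As a rough sketch of that outbound-only idea (a conceptual illustration, not the Argo Tunnel or cloudflared protocol; the edge endpoint and message format are invented), a connector running beside the application dials out to the edge and relays requests back to a local service, so the firewall never needs to accept an inbound connection:

```typescript
// Conceptual sketch of an outbound-only connector; not the actual Argo Tunnel protocol.
// Assumes the "ws" package and Node 18+ (for the global fetch).
import WebSocket from "ws";

const EDGE = "wss://edge.example.net/tunnel";   // hypothetical edge endpoint
const ORIGIN = "http://localhost:8000";         // internal app that is never exposed directly

// The connector dials *out* to the edge; the firewall can drop all inbound connections.
const tunnel = new WebSocket(EDGE);

tunnel.on("message", async (raw) => {
  // The edge forwards an already-authenticated visitor request over the existing connection.
  const req = JSON.parse(raw.toString()) as { id: string; path: string };
  const res = await fetch(`${ORIGIN}${req.path}`);
  // Relay the origin's response back up the same tunnel.
  tunnel.send(JSON.stringify({ id: req.id, status: res.status, body: await res.text() }));
});

tunnel.on("open", () => console.log("outbound tunnel established; no inbound ports required"));
```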

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, are instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, to give administrators more visibility and to offer more security than a traditional VPN.
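
Because every request is fronted by that identity check, the origin itself can also verify the signed identity token that Access attaches to each authenticated request (the Cf-Access-Jwt-Assertion header). Below is a minimal sketch using the jose library; the team domain and audience tag are placeholders you would replace with your own Access configuration.

```typescript
// Sketch of origin-side verification that a request really came through Access.
// TEAM_DOMAIN and POLICY_AUD are placeholders, not real values.
import { createRemoteJWKSet, jwtVerify } from "jose";
import type { IncomingMessage } from "http";

const TEAM_DOMAIN = "https://example-team.cloudflareaccess.com"; // placeholder team domain
const POLICY_AUD = "your-application-aud-tag";                   // placeholder AUD tag

// Access publishes its signing keys at a well-known certs endpoint on the team domain.
const JWKS = createRemoteJWKSet(new URL(`${TEAM_DOMAIN}/cdn-cgi/access/certs`));

export async function requireAccessIdentity(req: IncomingMessage): Promise<string> {
  // Access attaches a signed JWT to every request it has authenticated.
  const token = req.headers["cf-access-jwt-assertion"];
  if (typeof token !== "string") throw new Error("request did not come through Access");

  const { payload } = await jwtVerify(token, JWKS, {
    issuer: TEAM_DOMAIN,
    audience: POLICY_AUD,
  });
  return String(payload.email ?? payload.sub ?? ""); // the authenticated identity
}
```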

Every Cloudflare data center, in 200 cities around the world, performs the entire authentication check. Users connect faster, wherever they are working, versus having to backhaul traffic to a home office.

Access also saves time for administrators. Instead of configuring complex and error-prone network policies, IT teams build policies that enforce authentication using their identity provider. Security leaders can control who can reach internal applications in a single pane of glass and audit comprehensive logs from one source.

In the last year, we’ve released features that expand how teams can use Access so they can fully eliminate their VPN. We’ve added support for RDP and SSH, and released short-lived certificates that replace static keys. However, teams also use applications that do not run in infrastructure they control, such as SaaS applications like Box and Office 365. To solve that challenge, we’re releasing a new product, Cloudflare Gateway.

Cloudflare Gateway secures teams by making the first destination a Cloudflare data center located near them, for all outbound traffic. The product places Cloudflare’s global network between users and the Internet, rather than forcing the Internet through legacy hardware on-site.

Cloudflare Gateway’s first feature prevents users from running into phishing scams or malware sites by combining the world’s fastest DNS resolver with Cloudflare’s threat intelligence. The Gateway resolver can be deployed to office networks and user devices in a matter of minutes. Once configured, Gateway actively blocks potential malware and phishing sites while also applying content filtering based on policies administrators configure.

However, threats can be hidden in otherwise healthy hostnames. To protect users from more advanced threats, Gateway will audit URLs and, if enabled, inspect packets to find potential attacks before they compromise a device or office network. That same deep packet inspection can then be applied to prevent the accidental or malicious export of data.

Organizations can add Gateway’s advanced threat prevention in two models:

  1. by connecting office networks to the Cloudflare security fabric through GRE tunnels and
  2. by distributing forward proxy clients to mobile devices.

The first model, delivered through Cloudflare Magic Transit, will give enterprises a way to migrate to Gateway without disrupting their current workflow. Instead of backhauling office traffic to centralized on-premise hardware, teams will point traffic to Cloudflare over GRE tunnels. Once the outbound traffic arrives at Cloudflare, Gateway can apply file type controls, in-line inspection, and data loss protection without impacting connection performance. Simultaneously, Magic Transit protects a corporate IP network from inbound attacks.

When users leave the office, Gateway’s client application will deliver the same level of Internet security. Every connection from the device will pass through Cloudflare first, where Gateway can apply threat prevention policies. Cloudflare can also deliver that security without compromising user experience, building on new technologies like the WireGuard protocol and integrating features from Cloudflare Warp, our popular individual forward proxy.

In both environments, one of the most common vectors for attacks is still the browser. Zero-day threats can compromise devices by using the browser as a vehicle to execute code.

Existing browser isolation solutions attempt to solve this challenge with one of two approaches: 1) pixel pushing and 2) DOM reconstruction. Both approaches lead to tradeoffs in performance and security. Pixel pushing degrades speed while also driving up the cost to stream sessions to users. DOM reconstruction attempts to strip potentially harmful content before sending it to the user. That tactic relies on known vulnerabilities and is still exposed to the zero-day threats that isolation tools were meant to solve.

Cloudflare Gateway will feature always-on browser isolation that not only protects users from zero-day threats, but can also make browsing the Internet faster. The solution will apply a patented approach to send vector commands that a browser can render without the need for an agent on the device. A user’s browser session will instead run in a Cloudflare data center, where Gateway destroys the instance at the end of each session, keeping malware away from user devices without compromising performance.

When deployed, remote browser sessions will run in one of Cloudflare’s 200 data centers, connecting users to a faster, safer model of navigating the Internet without the compromises of legacy approaches. If you would like to learn more about this approach to browser isolation, I'd encourage you to read Darren Remington's blog post on the topic.

Why Cloudflare?

To make infrastructure safer, and web properties faster, Cloudflare built out one of the world’s largest and most sophisticated networks. Cloudflare for Teams builds on that same platform, and all of its unique advantages.

Fast

Security should always be bundled with performance. Cloudflare’s infrastructure products delivered better protection while also improving speed. That’s possible because of the network we’ve built: both its distribution and the data we gather about it allow Cloudflare to optimize requests and connections.

Cloudflare for Teams brings that same speed to end users by using that same network and route optimization. Additionally, Cloudflare has built industry-leading components that will become features of this new platform. All of these components leverage Cloudflare’s network and scale to improve user performance.

Gateway’s DNS-filtering features build on Cloudflare’s 1.1.1.1 public DNS resolver, the world’s fastest resolver according to DNSPerf. To protect entire connections, Cloudflare for Teams will deploy the same technology that underpins Warp, a new type of VPN with consistently better reviews than competitors.

Massive scalability

Cloudflare’s 30 Tbps of network capacity can scale to meet the needs of nearly any enterprise. Customers can stop worrying about buying enough hardware to meet their organization’s needs and, instead, replace it with Cloudflare.

Near users, wherever they are — literally

Cloudflare’s network operates in 200 cities and more than 90 countries around the world, putting Cloudflare’s security and performance close to users, wherever they work.

That network includes presence in global headquarters, like London and New York, but also in traditionally underserved regions around the world.

Cloudflare data centers operate within 100 milliseconds of 99% of the Internet-connected population in the developed world, and within 100 milliseconds of 94% of the Internet-connected population globally. All of your end users should feel like they have the performance traditionally only available to those in headquarters.

Easier for administrators

When security products are confusing, teams make mistakes that become incidents. Cloudflare’s solution is straightforward and easy to deploy. Most security providers in this market built features first and never considered usability or implementation.

Cloudflare Access can be deployed in less than an hour; Gateway features will build on top of that dashboard and workflow. Cloudflare for Teams brings the same ease of use from the tools that protect infrastructure to the products that now secure users, devices, and data.

Better threat intelligence

Cloudflare’s network already secures more than 20 million Internet properties and blocks 72 billion cyber threats each day. We build products using the threat data we gather from protecting 11 million HTTP requests per second on average.

What’s next?

Cloudflare Access is available right now. You can start replacing your team’s VPN with Cloudflare’s network today. Certain features of Cloudflare Gateway are available in beta now, and others will be added in beta over time. You can sign up to be notified about Gateway now.

07:00

Cloudflare + Remote Browser Isolation [The Cloudflare Blog]

Cloudflare announced today that it has purchased S2 Systems Corporation, a Seattle-area startup that has built an innovative remote browser isolation solution unlike any other currently in the market. The majority of endpoint compromises involve web browsers — by putting space between users’ devices and where web code executes, browser isolation makes endpoints substantially more secure. In this blog post, I’ll discuss what browser isolation is, why it is important, how the S2 Systems cloud browser works, and how it fits with Cloudflare’s mission to help build a better Internet.

What’s wrong with web browsing?

It’s been more than 30 years since Tim Berners-Lee wrote the project proposal defining the technology underlying what we now call the world wide web. What Berners-Lee envisioned as being useful for “several thousand people, many of them very creative, all working toward common goals”[1] has grown to become a fundamental part of commerce, business, the global economy, and an integral part of society used by more than 58% of the world’s population[2].

The world wide web and web browsers have unequivocally become the platform for much of the productive work (and play) people do every day. However, as the pervasiveness of the web grew, so did opportunities for bad actors. Hardly a day passes without a major new cybersecurity breach in the news. Several contributing factors have helped propel cybercrime to unprecedented levels: the commercialization of hacking tools, the emergence of malware-as-a-service, the presence of well-financed nation states and organized crime, and the development of cryptocurrencies which enable malicious actors of all stripes to anonymously monetize their activities.

The vast majority of security breaches originate from the web. Gartner calls the public Internet a “cesspool of attacks” and identifies web browsers as the primary culprit responsible for 70% of endpoint compromises.[3] This should not be surprising. Although modern web browsers are remarkable, many fundamental architectural decisions were made in the 1990’s before concepts like security, privacy, corporate oversight, and compliance were issues or even considerations. Core web browsing functionality (including the entire underlying WWW architecture) was designed and built for a different era and circumstances.

In today’s world, several web browsing assumptions are outdated or even dangerous. Web browsers and the underlying server technologies encompass an extensive – and growing – list of complex interrelated technologies. These technologies are constantly in flux, driven by vibrant open source communities, content publishers, search engines, advertisers, and competition between browser companies. As a result of this underlying complexity, web browsers have become primary attack vectors. According to Gartner, “the very act of users browsing the internet and clicking on URL links opens the enterprise to significant risk. […] Attacking thru the browser is too easy, and the targets too rich.”[4] Even “ostensibly ‘good’ websites are easily compromised and can be used to attack visitors” (Gartner[5]) with more than 40% of malicious URLs found on good domains (Webroot[6]). (A complete list of vulnerabilities is beyond the scope of this post.)

The very structure and underlying technologies that power the web are inherently difficult to secure. Some browser vulnerabilities result from illegitimate use of legitimate functionality: enabling browsers to download files and documents is good, but allowing downloading of files infected with malware is bad; dynamic loading of content across multiple sites within a single webpage is good, but cross-site scripting is bad; enabling an extensive advertising ecosystem is good, but the inability to detect hijacked links or malicious redirects to malware or phishing sites is bad; etc.

Enterprise Browsing Issues

Enterprises have additional challenges with traditional browsers.

Paradoxically, IT departments have the least amount of control over the most ubiquitous app in the enterprise – the web browser. The most common complaints about web browsers from enterprise security and IT professionals are:

  1. Security (obviously). The public internet is a constant source of security breaches and the problem is growing given an 11x escalation in attacks since 2016 (Meeker[7]). Costs of detection and remediation are escalating and the reputational damage and financial losses for breaches can be substantial.
  2. Control. IT departments have little visibility into user activity and limited ability to leverage content disarm and reconstruction (CDR) and data loss prevention (DLP) mechanisms, including when, where, or by whom files were downloaded or uploaded.
  3. Compliance. The inability to control data and activity across geographies or capture required audit telemetry to meet increasingly strict regulatory requirements. This results in significant exposure to penalties and fines.

Given vulnerabilities exposed through everyday user activities such as email and web browsing, some organizations attempt to restrict these activities. As both are legitimate and critical business functions, efforts to limit or curtail web browser use inevitably fail or have a substantive negative impact on business productivity and employee morale.

Current approaches to mitigating security issues inherent in browsing the web are largely based on signature technology for data files and executables, and lists of known good/bad URLs and DNS addresses. The challenge with these approaches is the difficulty of keeping current with known attacks (file signatures, URLs and DNS addresses) and their inherent vulnerability to zero-day attacks. Hackers have devised automated tools to defeat signature-based approaches (e.g. generating hordes of files with unknown signatures) and create millions of transient websites in order to defeat URL/DNS blacklists.

While these approaches certainly prevent some attacks, the growing number of incidents and severity of security breaches clearly indicate more effective alternatives are needed.

What is browser isolation?

The core concept behind browser isolation is security-through-physical-isolation to create a “gap” between a user’s web browser and the endpoint device thereby protecting the device (and the enterprise network) from exploits and attacks. Unlike secure web gateways, antivirus software, or firewalls which rely on known threat patterns or signatures, this is a zero-trust approach.

There are two primary browser isolation architectures: (1) client-based local isolation and (2) remote isolation.

Local browser isolation attempts to isolate a browser running on a local endpoint using app-level or OS-level sandboxing. In addition to leaving the endpoint at risk when there is an isolation failure, these systems require significant endpoint resources (memory + compute), tend to be brittle, and are difficult for IT to manage as they depend on support from specific hardware and software components.

Further, local browser isolation does nothing to address the control and compliance issues mentioned above.

Remote browser isolation (RBI) protects the endpoint by moving the browser to a remote service in the cloud or to a separate on-premises server within the enterprise network:

  • On-premises isolation simply relocates the risk from the endpoint to another location within the enterprise without actually eliminating the risk.
  • Cloud-based remote browsing isolates the end-user device and the enterprise’s network while fully enabling IT control and compliance solutions.

Given the inherent advantages, most browser isolation solutions – including S2 Systems – leverage cloud-based remote isolation. Properly implemented, remote browser isolation can protect the organization from browser exploits, plug-ins, zero-day vulnerabilities, malware and other attacks embedded in web content.

How does Remote Browser Isolation (RBI) work?

In a typical cloud-based RBI system (the blue-dashed box ❶ below), individual remote browsers ❷ are run in the cloud as disposable containerized instances – typically, one instance per user. The remote browser sends the rendered contents of a web page to the user endpoint device ❹ using a specific protocol and data format ❸. Actions by the user, such as keystrokes, mouse and scroll commands, are sent back to the isolation service over a secure encrypted channel where they are processed by the remote browser and any resulting changes to the remote browser webpage are sent back to the endpoint device.

(Diagram: a typical cloud-based RBI architecture, with the numbered components referenced above)

In effect, the endpoint device is “remote controlling” the cloud browser. Some RBI systems use proprietary clients installed on the local endpoint while others leverage existing HTML5-compatible browsers on the endpoint and are considered ‘clientless.’
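
In browser terms, that remote-control loop looks roughly like the sketch below: forward local input events to the isolation service, and paint whatever rendering updates come back. This is a deliberately simplified, pixel-pushing-style caricature with an invented endpoint and message format, not any vendor's actual protocol.

```typescript
// Simplified pixel-pushing-style client loop; endpoint and message format are invented.
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;
const session = new WebSocket("wss://rbi.example.net/session");   // hypothetical isolation service
session.binaryType = "arraybuffer";

// 1. Forward user input to the remote browser over the encrypted channel.
canvas.addEventListener("mousemove", (e) =>
  session.send(JSON.stringify({ type: "mouse", x: e.offsetX, y: e.offsetY })));
window.addEventListener("keydown", (e) =>
  session.send(JSON.stringify({ type: "key", key: e.key })));

// 2. Paint the frames (encoded images of the remote page) that come back.
session.onmessage = async (msg) => {
  const frame = await createImageBitmap(new Blob([msg.data]));   // decode the pushed pixels
  ctx.drawImage(frame, 0, 0);
};
```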

Data breaches that occur in the remote browser are isolated from the local endpoint and enterprise network. Every remote browser instance is treated as if compromised and terminated after each session. New browser sessions start with a fresh instance. Obviously, the RBI service must prevent browser breaches from leaking outside the browser containers to the service itself. Most RBI systems provide remote file viewers negating the need to download files but also have the ability to inspect files for malware before allowing them to be downloaded.

A critical component in the above architecture is the specific remoting technology employed by the cloud RBI service. The remoting technology has a significant impact on the operating cost and scalability of the RBI service, website fidelity and compatibility, bandwidth requirements, endpoint hardware/software requirements and even the user experience. Remoting technology also determines the effective level of security provided by the RBI system.

All current cloud RBI systems employ one of two remoting technologies:

(1)    Pixel pushing is a video-based approach which captures pixel images of the remote browser ‘window’ and transmits a sequence of images to the client endpoint browser or proprietary client. This is similar to how remote desktop and VNC systems work. Although considered to be relatively secure, there are several inherent challenges with this approach:

  • Continuously encoding and transmitting video streams of remote webpages to user endpoint devices is very costly. Scaling this approach to millions of users is financially prohibitive and logistically complex.
  • Requires significant bandwidth. Even when highly optimized, pushing pixels is bandwidth intensive.
  • Unavoidable latency results in an unsatisfactory user experience. These systems tend to be slow and generate a lot of user complaints.
  • Mobile support is degraded by high bandwidth requirements compounded by inconsistent connectivity.
  • HiDPI displays may render at lower resolutions. Pixel counts grow quadratically with resolution, which means remote browser sessions (particularly fonts) on HiDPI devices can appear fuzzy or out of focus.

(2) DOM reconstruction emerged as a response to the shortcomings of pixel pushing. DOM reconstruction attempts to clean webpage HTML, CSS, etc. before forwarding the content to the local endpoint browser. The underlying HTML, CSS, etc., are reconstructed in an attempt to eliminate active code, known exploits, and other potentially malicious content. While addressing the latency, operational cost, and user experience issues of pixel pushing, it introduces two significant new issues:

  • Security. The underlying technologies – HTML, CSS, web fonts, etc. – are the attack vectors hackers leverage to breach endpoints. Attempting to remove malicious content or code is like washing mosquitos: you can attempt to clean them, but they remain inherent carriers of dangerous and malicious material. It is impossible to identify, in advance, all the means of exploiting these technologies even through an RBI system.
  • Website fidelity. Inevitably, attempting to remove malicious active code, reconstructing HTML, CSS and other aspects of modern websites results in broken pages that don’t render properly or don’t render at all. Websites that work today may not work tomorrow as site publishers make daily changes that may break DOM reconstruction functionality. The result is an infinite tail of issues requiring significant resources in an endless game of whack-a-mole. Some RBI solutions struggle to support common enterprise-wide services like Google G Suite or Microsoft Office 365 even as malware laden web email continues to be a significant source of breaches.

Customers are left to choose between a secure solution with a bad user experience and high operating costs, or a faster, much less secure solution that breaks websites. These tradeoffs have driven some RBI providers to implement both remoting technologies into their products. However, this leaves customers to pick their poison without addressing the fundamental issues.

Given the significant tradeoffs in RBI systems today, one common optimization for current customers is to deploy remote browsing capabilities to only the most vulnerable users in an organization such as high-risk executives, finance, business development, or HR employees. Like vaccinating half the pupils in a classroom, this results in a false sense of security that does little to protect the larger organization.

Unfortunately, the largest “gap” created by current remote browser isolation systems is the void between the potential of the underlying isolation concept and the implementation reality of currently available RBI systems.

S2 Systems Remote Browser Isolation

S2 Systems remote browser isolation is a fundamentally different approach based on S2-patented technology called Network Vector Rendering (NVR).

The S2 remote browser is based on the open-source Chromium engine on which Google Chrome is built. In addition to powering Google Chrome which has a ~70% market share[8], Chromium powers twenty-one other web browsers including the new Microsoft Edge browser.[9] As a result, significant ongoing investment in the Chromium engine ensures the highest levels of website support, compatibility and a continuous stream of improvements.

A key architectural feature of the Chromium browser is its use of the Skia graphics library. Skia is a widely-used cross-platform graphics engine for Android, Google Chrome, Chrome OS, Mozilla Firefox, Firefox OS, FitbitOS, Flutter, the Electron application framework and many other products. Like Chromium, the pervasiveness of Skia ensures ongoing broad hardware and platform support.

(Figure: a Skia code fragment)

Everything visible in a Chromium browser window is rendered through the Skia rendering layer. This includes application window UI such as menus, but more importantly, the entire contents of the webpage window are rendered through Skia. Chromium compositing, layout and rendering are extremely complex with multiple parallel paths optimized for different content types, device contexts, etc. The following figure is an egregious simplification for illustration purposes of how S2 works (apologies to Chromium experts):

(Diagram: a simplified view of how S2 NVR fits into the Chromium/Skia rendering pipeline)

S2 Systems NVR technology intercepts the remote Chromium browser’s Skia draw commands ❶, tokenizes and compresses them, then encrypts and transmits them across the wire ❷ to any HTML5 compliant web browser ❸ (Chrome, Firefox, Safari, etc.) running locally on the user endpoint desktop or mobile device. The Skia API commands captured by NVR are pre-rasterization which means they are highly compact.

On first use, the S2 RBI service transparently pushes an NVR WebAssembly (Wasm) library ❹ to the local HTML5 web browser on the endpoint device, where it is cached for subsequent use. The NVR Wasm code contains an embedded Skia library and the necessary code to unpack, decrypt and “replay” the Skia draw commands from the remote RBI server to the local browser window. WebAssembly’s ability to “execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms”[10] results in near-native drawing performance.
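
The sketch below shows what a client-side replay loop could look like, building on the hypothetical wire format above. It is illustrative only: the real S2 client replays commands through its Wasm-embedded Skia library, whereas this sketch uses the ordinary Canvas 2D API and a WebSocket endpoint (serviceUrl) that is assumed, not documented:

```ts
// Illustrative replay loop running in the local HTML5 browser.
// DrawCommand and decodeFrame come from the earlier sketch.

function startReplay(canvas: HTMLCanvasElement, serviceUrl: string): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("Canvas 2D context unavailable");

  const socket = new WebSocket(serviceUrl);
  socket.binaryType = "arraybuffer";

  socket.onmessage = (event: MessageEvent<ArrayBuffer>) => {
    // Decryption and decompression are omitted in this sketch.
    const commands = decodeFrame(new Uint8Array(event.data));
    for (const cmd of commands) {
      switch (cmd.op) {
        case "drawRect":
          ctx.fillStyle = cmd.color;
          ctx.fillRect(cmd.x, cmd.y, cmd.w, cmd.h);
          break;
        case "drawText":
          ctx.fillStyle = cmd.color;
          ctx.font = cmd.font;
          ctx.fillText(cmd.text, cmd.x, cmd.y);
          break;
        case "drawImage":
          // Resource handling (decoded images, glyph atlases) omitted here.
          break;
      }
    }
  };
}
```

Because only structured commands cross the wire, no webpage HTML, scripts or active content ever reach the endpoint, which is the security property the architecture depends on.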

The S2 remote browser isolation service uses headless Chromium-based browsers in the cloud, transparently intercepts draw layer output, transmits the draw commands efficiently and securely over the web, and redraws them in the windows of local HTML5 browsers. This architecture has a number of technical advantages:

(1)    Security: the underlying data transport is not an existing attack vector and customers aren’t forced to make a tradeoff between security and performance.

(2)    Website compatibility: there are no website compatibility issues, nor a long tail of work chasing evolving web technologies or emerging vulnerabilities.

(3)    Performance: the system is very fast, typically faster than local browsing (subject of a future blog post).

(4)    Transparent user experience: S2 remote browsing feels like native browsing; users are generally unaware when they are browsing remotely.

(5)    Bandwidth: requires less bandwidth than local browsing for most websites, and enables advanced caching and other proprietary optimizations unique to web browsers and the nature of web content and technologies.

(6)    Clientless: leverages existing HTML5 compatible browsers already installed on user endpoint desktop and mobile devices.

(7)    Cost-effective scalability: although the details are beyond the scope of this post, the S2 backend and NVR technology have substantially lower operating costs than existing RBI technologies. Operating costs translate directly to customer costs. The S2 system was designed to make deployment to an entire enterprise and not just targeted users (aka: vaccinating half the class) both feasible and attractive for customers.

(8)    RBI-as-a-platform: enables implementation of related/adjacent services such as DLP, content disarm & reconstruction (CDR), phishing detection and prevention, etc.

The S2 Systems Remote Browser Isolation Service and its underlying NVR technology eliminate the disconnect between the conceptual potential and promise of browser isolation and the unsatisfying reality of current RBI technologies.

Cloudflare + S2 Systems Remote Browser Isolation

Cloudflare’s global cloud platform is uniquely suited to remote browser isolation. Seamless integration with our cloud-native performance, reliability and advanced security products and services provides powerful capabilities for our customers.

Our Cloudflare Workers architecture enables edge computing in 200 cities in more than 90 countries and will put a remote browser within 100 milliseconds of 99% of the Internet-connected population in the developed world. With more than 20 million Internet properties directly connected to our network, Cloudflare remote browser isolation will benefit from locally cached data and build on the impressive connectivity and performance of our network. Our Argo Smart Routing capability leverages our communications backbone to route traffic across faster and more reliable network paths, resulting in an average of 30% faster access to web assets.
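
For readers unfamiliar with the Workers model, the sketch below is a generic, minimal fetch handler of the kind Workers runs at the edge. It is not the S2 integration itself, just an illustration of code executing in the Cloudflare data center nearest the user; the response text and the use of the CF-IPCountry header are arbitrary choices for the example:

```ts
// Minimal Cloudflare Workers fetch handler (generic edge-compute example,
// unrelated to the S2 remote browsing service internals).
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  // This code runs in the Cloudflare location closest to the requesting user.
  const country = request.headers.get("cf-ipcountry") ?? "your region";
  return new Response(`Served from the edge near ${country}`, {
    headers: { "content-type": "text/plain" },
  });
}
```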

Once it has been integrated with our Cloudflare for Teams suite of advanced security products, remote browser isolation will provide protection from browser exploits, zero-day vulnerabilities, malware and other attacks embedded in web content. Enterprises will be able to secure the browsers of all employees without having to make trade-offs between security and user experience. The service will enable IT control of browser-conveyed enterprise data and compliance oversight. Seamless integration across our products and services will enable users and enterprises to browse the web without fear or consequence.

Cloudflare’s mission is to help build a better Internet. This means protecting users and enterprises as they work and play on the Internet; it means making Internet access fast, reliable and transparent. Reimagining and modernizing how web browsing works is an important part of helping build a better Internet.


[1] https://www.w3.org/History/1989/proposal.html

[2] “Internet World Stats,” https://www.internetworldstats.com/, retrieved December 21, 2019

[3] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation” (report ID: G00350577), 8 March 2018

[4] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[5] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[6] “2019 Webroot Threat Report: Forty Percent of Malicious URLs Found on Good Domains”, February 28, 2019

[7] Mary Meeker, “Kleiner Perkins 2018 Internet Trends”

[8] https://www.statista.com/statistics/544400/market-share-of-internet-browsers-desktop/, retrieved December 21, 2019

[9] https://en.wikipedia.org/wiki/Chromium_(web_browser), retrieved December 29, 2019

[10] https://webassembly.org/, retrieved December 30, 2019

Monday, 06 January

03:00

Most read articles in 2019 not from 2019 [Fedora Magazine]

Some topics are very popular, no matter when they’re first mentioned. And Fedora Magazine has a few articles that have proven to be popular for a long time.

You’re reading the last article from the “best of 2019” series. This time, though, it’s about articles written before 2019 that remained very popular in 2019.

All of the articles below have been checked and updated to be correct even now, in early 2020. Let’s dive in!

i3 tiling window manager

Wish to try an alternative desktop? The following article introduces i3 — a tiling window manager that doesn’t require high-end hardware, but is powerful and highly customizable. You’ll learn about the installation process, some initial setup, and a few tricks to get you started.

Powerline

Would you like your shell to be a bit more organized? Then you might want to try Powerline — a utility that gives you status information, and some visual tweaks to your shell to make it more pleasant and organized.

Monospace fonts

Do you spend a lot of your time in a terminal or a code editor? And is your font making you happy? Discover some beautiful monospace fonts available in the Fedora repositories.

Image viewers

Is the default image viewer on your desktop not working the way you want? The following article shows 17 image viewers available in Fedora — ranging from simple ones to those packed with features.

Fedora as a VirtualBox guest

Love Fedora but your machine runs Windows or macOS? One option to get Fedora running on your machine is virtualization. Your system keeps running, and you’ll be able to access Fedora at the same time in a virtual machine. The following article introduces VirtualBox, which can do just that.

Saturday, 04 January

10:00

Cloudflare Expanded to 200 Cities in 2019 [The Cloudflare Blog]


We have exciting news: Cloudflare closed out the decade by reaching our 200th city* across 90+ countries. Each new location increases the security, performance, and reliability of the 20-million-plus Internet properties on our network. Over the last quarter, we turned up seven data centers spanning from Chattogram, Bangladesh all the way to the Hawaiian Islands:

  • Chattogram & Dhaka, Bangladesh. These data centers are our first in Bangladesh, ensuring that its 161 million residents will have a better experience on our network.
  • Honolulu, Hawaii, USA. Honolulu is one of the most remote cities in the world; with our Honolulu data center up and running, Hawaiian visitors can be served 2,400 miles closer than ever before! Hawaii is a hub for many submarine cables in the Pacific, meaning that some Pacific Islands will also see significant improvements.
  • Adelaide, Australia. Our 7th Australasian data center can be found “down under” in the capital of South Australia. Despite being Australia’s fifth-largest city, Adelaide is often overlooked for Australian interconnection. We, for one, are happy to establish a presence in it and its unique UTC+9:30 time zone!
  • Thimphu, Bhutan. Bhutan is the seventh SAARC (South Asian Association for Regional Cooperation) country with a Cloudflare network presence. Thimphu is our first Bhutanese data center, continuing our mission of security and performance for all.
  • St George’s, Grenada. Our Grenadian data center is joining the Grenada Internet Exchange (GREX), the first non-profit Internet Exchange (IX) in the English-speaking Caribbean.

We’ve come a long way since our launch in 2010, moving from colocating in key Internet hubs to fanning out across the globe and partnering with local ISPs. This has allowed us to offer security, performance, and reliability to Internet users in all corners of the world. In addition to the 35 cities we added in 2019, we expanded our existing data centers behind-the-scenes. We believe there are a lot of opportunities to harness in 2020 as we look to bring our network and its edge-computing power closer and closer to everyone on the Internet.

*Includes cities where we have data centers with active Internet ports and those where we are configuring our servers to handle traffic for more customers (at the time of publishing).