Saturday, 18 January

03:00

98.6 Degrees Fahrenheit Isn't the Average Anymore [Slashdot]

schwit1 shares a report from The Wall Street Journal: Nearly 150 years ago, [German physician Carl Reinhold August Wunderlich] analyzed a million temperatures from 25,000 patients and concluded that normal human-body temperature is 98.6 degrees Fahrenheit. In a new study, researchers from Stanford University argue that Wunderlich's number was correct at the time but is no longer accurate because the human body has changed. Today, they say, the average normal human-body temperature is closer to 97.5 degrees Fahrenheit (Warning: source paywalled; alternative source). To test their hypothesis that today's normal body temperature is lower than in the past, Dr. Parsonnet and her research partners analyzed 677,423 temperatures collected from 189,338 individuals over a span of 157 years. The readings were recorded in the pension records of Civil War veterans from the start of the war through 1940; in the National Health and Nutrition Examination Survey I conducted by the U.S. Centers for Disease Control and Prevention from 1971 through 1974; and in the Stanford Translational Research Integrated Database Environment from 2007 through 2017. Overall, temperatures of the Civil War veterans were higher than measurements taken in the 1970s, and, in turn, those measurements were higher than those collected in the 2000s. The study has been published in the journal eLife.

Read more of this story at Slashdot.

00:52

Btrfs Async Discard Support Looks To Be Ready For Linux 5.6 [Phoronix]

After months of work by Facebook engineers, it looks like the new async discard support for Btrfs is ready for the upcoming Linux 5.6 cycle as a win for this Linux file-system on solid-state storage making use of TRIM/DISCARD functionality...

00:00

NBC's New Peacock Streaming Service Is Just One Big Ad-Injection Machine [Slashdot]

Comcast's NBCUniversal is launching a new streaming service in April called Peacock. With three pricing tiers from free to $10 per month, Comcast wants Peacock "to be an ad delivery system to destroy all others in its path," writes Ryan Waniata via Digital Trends. From the report: In a shockingly long investor call, NBC revealed its big new strategy for delivering its many intellectual property spoils online, which will be offered in a multi-tiered plan (with both ad-based and ad-free versions) rolling up a content hodge-podge, including NBCUniversal TV classics and films on-demand, a handful of new exclusive shows, and live content, from NBC News to the Tokyo Olympics. Peacock's ad-based service -- which rolls out first to the company's Xfinity and Flex cable customers from within their cable box -- will arrive in at least some form for zero dollars per month. A $5 monthly charge will get you more content (but still carries ads), while a $10 fee will get you ad-free viewing and the whole kit-and-caboodle. But here's the thing: The execs at Comcast don't even want you to buy that service. It's an also-ran. A red herring. NBCUniversal Chairman of Advertising & Partnerships Linda Yaccarino spoke vociferously to the crowd of investors, saying, "Peacock will define the future of advertising. The future of free." To hook viewers into their ad-loaded trap, NBC execs have leveraged Peacock to offer "the lightest ad load in the industry," with just 5 minutes of ads per hour. To be fair, that ad-to-content ratio would be quite light these days in TV talk. But, Yaccarino continued, these would be revolutionary new ad innovations for Peacock, including ads that won't be repeated over and over. Ads that will look "as good as the content" they accompany (whatever that means). Solo ads where "brands become the hero" and offer a TV show brought to you by a single advertiser. Ads. Ads. And more ads.

Read more of this story at Slashdot.

Friday, 17 January

22:09

Debian Is Making The Process Easier To Bisect Itself Using Their Wayback Machine [Phoronix]

For a decade now snapshot.debian.org has been around for accessing old Debian packages and finding packages by date and version number. Only now, though, is a guide materializing for leveraging this Debian "wayback machine" to help bisect regressions in the distribution that span multiple or unknown packages...

20:30

An Algorithm That Learns Through Rewards May Show How Our Brain Does Too [Slashdot]

An anonymous reader quotes a report from MIT Technology Review: In a paper published in Nature today, DeepMind, Alphabet's AI subsidiary, has once again used lessons from reinforcement learning to propose a new theory about the reward mechanisms within our brains. The hypothesis, supported by initial experimental findings, could not only improve our understanding of mental health and motivation. It could also validate the current direction of AI research toward building more human-like general intelligence. At a high level, reinforcement learning follows the insight derived from Pavlov's dogs: it's possible to teach an agent to master complex, novel tasks through only positive and negative feedback. An algorithm begins learning an assigned task by randomly predicting which action might earn it a reward. It then takes the action, observes the real reward, and adjusts its prediction based on the margin of error. Over millions or even billions of trials, the algorithm's prediction errors converge to zero, at which point it knows precisely which actions to take to maximize its reward and so complete its task. It turns out the brain's reward system works in much the same way -- a discovery made in the 1990s, inspired by reinforcement-learning algorithms. When a human or animal is about to perform an action, its dopamine neurons make a prediction about the expected reward. Once the actual reward is received, they then fire off an amount of dopamine that corresponds to the prediction error. A better reward than expected triggers a strong dopamine release, while a worse reward than expected suppresses the chemical's production. The dopamine, in other words, serves as a correction signal, telling the neurons to adjust their predictions until they converge to reality. The phenomenon, known as reward prediction error, works much like a reinforcement-learning algorithm. The improved algorithm changes the way it predicts rewards. "Whereas the old approach estimated rewards as a single number -- meant to equal the average expected outcome -- the new approach represents them more accurately as a distribution," the report says. This lends itself to a new hypothesis: Do dopamine neurons also predict rewards in the same distributional way? After testing this theory, DeepMind found "compelling evidence that the brain indeed uses distributional reward predictions to strengthen its learning algorithm," reports MIT Technology Review.
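
The loop described above is simple enough to sketch in a few lines of code. What follows is a toy illustration, not DeepMind's code: a classic value estimate nudged toward the average reward by its prediction error, alongside a rough "distributional" variant in which several estimates learn with asymmetric rates so they fan out across the range of outcomes instead of collapsing to a single mean.

```python
# Toy sketch of reward prediction error (not DeepMind's implementation).
import random

def classic_td(rewards, lr=0.1):
    """One prediction, updated toward the average outcome."""
    value = 0.0
    for r in rewards:
        error = r - value      # reward prediction error (the dopamine-like signal)
        value += lr * error    # adjust the prediction toward reality
    return value

def distributional_td(rewards, n=5, base_lr=0.1):
    """Several predictions with asymmetric learning rates (expectile-style)."""
    values = [0.0] * n
    taus = [(i + 1) / (n + 1) for i in range(n)]   # pessimistic ... optimistic
    for r in rewards:
        for i, tau in enumerate(taus):
            error = r - values[i]
            lr = base_lr * (tau if error > 0 else (1 - tau))  # asymmetric update
            values[i] += lr * error
    return values

if __name__ == "__main__":
    random.seed(0)
    # Toy task: a reward of 1.0 arrives on roughly 30% of trials, 0.0 otherwise.
    outcomes = [1.0 if random.random() < 0.3 else 0.0 for _ in range(5000)]
    print("classic estimate (near the 0.3 mean):", round(classic_td(outcomes), 3))
    print("distributional estimates (spread out):",
          [round(v, 2) for v in distributional_td(outcomes)])
```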

Read more of this story at Slashdot.

19:02

PopSockets CEO Calls Out Amazon's 'Bullying With a Smile' Tactics [Slashdot]

At a House Judiciary antitrust subcommittee on competition in the digital economy, PopSockets CEO and inventor David Barnett described how Amazon used shady tactics to pressure their smartphone accessory company. Mashable reports: "Multiple times we discovered that Amazon itself had sourced counterfeit product and was selling it alongside our own product," he noted. Barnett, under oath, told the gathered members of the House that Amazon initially played nice only to drop the hammer when it believed no one was watching. After agreeing to a written contract stipulating a price at which PopSockets would be sold on Amazon, the e-commerce giant would then allegedly unilaterally lower the price and demand that PopSockets make up the difference. Colorado Congressman Ed Perlmutter asked Barnett how Amazon could "ignore the contract that [PopSockets] entered into and just say, 'Sorry, that was our contract, but you got to lower your price.'" Barnett didn't mince words. "With coercive tactics, basically," he replied. "And these are tactics that are mainly executed by phone. It's one of the strangest relationships I've ever had with a retailer." Barnett emphasized that, on paper, the contract "appears to be negotiated in good faith." However, he claimed, this is followed by "... frequent phone calls. And on the phone calls we get what I might call bullying with a smile. Very friendly people that we deal with who say, 'By the way, we dropped the price of X product last week. We need you to pay for it.'" Barnett said he would push back and that's when "the threats come." He asserted that Amazon representatives would tell him over the phone: "If we don't get it, then we're going to source product from the gray market."

Read more of this story at Slashdot.

18:25

Google Parent Company Alphabet Hits $1 Trillion Market Cap [Slashdot]

Google parent-company Alphabet has hit $1 trillion in market capitalization, making it the fourth U.S. company to hit the milestone. CNBC reports: Apple was the first to hit the market cap milestone in 2018. Then, Microsoft and Amazon followed. Apple and Microsoft are still valued at more than a trillion dollars while Amazon has since fallen below the mark. Analysts are bullish on the company's newly appointed CEO, Sundar Pichai. In a surprise announcement in December 2019, Alphabet founder Larry Page announced plans to step down as CEO, along with co-founder and president Sergey Brin. Pichai had already been the CEO of Google, which includes all the company's core businesses -- including search, advertising, YouTube and Android -- and generates substantially all its revenue and profits. But he reported to Page, who also oversaw other businesses making long-term bets on experimental technology like self-driving cars and package delivery drones. Now, he's in charge of the whole conglomerate, although Page and Brin still have control over most of the company's voting shares, giving them significant influence in major decisions. "Optimism also comes from the company's growth in its Cloud business, which -- while still far behind the leader Amazon and runner-up Microsoft -- doubled its revenue run rate from $1 billion to $2 billion per quarter between Feb. 2018 and July 2019," adds CNBC.

Read more of this story at Slashdot.

18:17

It's Friday, the weekend has landed... and Microsoft warns of an Internet Explorer zero day exploited in the wild [The Register]

Plus, WeLeakInfo? Not anymore!

Roundup  Welcome to another Reg roundup of security news.…

17:45

Researchers Find Serious Flaws In WordPress Plugins Used On 400K Sites [Slashdot]

An anonymous reader quotes a report from Ars Technica: Serious vulnerabilities have recently come to light in three WordPress plugins that have been installed on a combined 400,000 websites, researchers said. InfiniteWP, WP Time Capsule, and WP Database Reset are all affected. The highest-impact flaw is an authentication bypass vulnerability in the InfiniteWP Client, a plugin installed on more than 300,000 websites. It allows administrators to manage multiple websites from a single server. The flaw lets anyone log in to an administrative account with no credentials at all. From there, attackers can delete contents, add new accounts, and carry out a wide range of other malicious tasks. The critical flaw in WP Time Capsule also leads to an authentication bypass that allows unauthenticated attackers to log in as an administrator. WP Time Capsule, which runs on about 20,000 sites, is designed to make backing up website data easier. By including a string in a POST request, attackers can obtain a list of all administrative accounts and automatically log in to the first one. The bug has been fixed in version 1.21.16. Sites running earlier versions should update right away. Web security firm WebARX has more details. The last vulnerable plugin is WP Database Reset, which is installed on about 80,000 sites. One flaw allows any unauthenticated person to reset any table in the database to its original WordPress state. The bug is caused by reset functions that aren't secured by the standard capability checks or security nonces. Exploits can result in the complete loss of data or a site reset to the default WordPress settings. A second security flaw in WP Database Reset causes a privilege-escalation vulnerability that allows any authenticated user -- even those with minimal system rights -- to gain administrative rights and lock out all other users. All site administrators using this plugin should update to version 3.15, which patches both vulnerabilities. Wordfence has more details about both flaws here.
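
For readers who want to see what those missing guards look like, here is a generic sketch in Python (WordPress plugins are written in PHP, and this is not the plugins' code): a destructive endpoint that refuses to act unless the caller both holds an administrator-level capability and presents a valid nonce. The vulnerable reset functions reportedly shipped without either check.

```python
# Hypothetical illustration only -- not WordPress/PHP code. It shows the two
# guards (capability check + nonce check) that the vulnerable reset handler
# reportedly lacked.
import hmac
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # needed for session support in this sketch

def current_user_can(capability: str) -> bool:
    # Assumption: a real app would look up the user's role/capabilities here.
    return session.get("role") == "administrator"

def perform_reset():
    pass  # placeholder for the actual destructive work (table reset, etc.)

@app.route("/reset-database", methods=["POST"])
def reset_database():
    # Guard 1: capability check -- only privileged users may trigger the action.
    if not current_user_can("manage_options"):
        abort(403)
    # Guard 2: nonce check -- the one-time token issued with the form must match
    # the token stored in the session, blocking forged or replayed requests.
    expected = session.get("reset_nonce", "")
    supplied = request.form.get("nonce", "")
    if not expected or not hmac.compare_digest(expected, supplied):
        abort(403)
    perform_reset()  # destructive action runs only after both checks pass
    return "reset complete"
```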

Read more of this story at Slashdot.

17:25

More Benchmarks Of The Initial Performance Hit From CVE-2019-14615 On Intel Gen7 Graphics [Phoronix]

On Wednesday I shone a light on the initial performance hit from Intel's CVE-2019-14615 graphics vulnerability, which strikes their "Gen7" graphics particularly hard. That initial testing was done with Core i7 hardware, while here are results looking at the equally disturbing performance hits on affected Core i3 and i5 processors too...

17:03

It's Not Just You: Google Added Annoying Icons To Search On Desktop [Slashdot]

Kim Lyons, writing for The Verge: Google added tiny favicon icons to its search results this week for some reason, creating more clutter in what used to be a clean interface, and seemingly without actually improving the results or the user experience. The company says it's part of a plan to make clearer where information is coming from, but how? In my Chrome desktop browser, it feels like an aggravating, unnecessary change that doesn't actually help the user determine how good, bad, or reputable an actual search result might be. Yes, ads are still clearly marked with the word "ad," which is a good thing. But do I need to see Best Buy's logo or AT&T's blue circle when I search for "Samsung Fold" to know they're trying to sell me something? Google says the favicon icons are "helping searchers better understand where information is coming from, more easily scan results & decide what to explore." If you don't care for the new look, Google has instructions on how to change or add a favicon to search results. Lifehacker also has instructions on how to apply filters to undo the favicon nonsense.

Read more of this story at Slashdot.

16:32

You're not Boeing to believe this: Yet another show-stopping software bug found in ill-fated 737 Max airplanes [The Register]

Jetliner's return to the skies likely to be delayed by more tech glitches

Boeing today said another software flaw has been spotted in its star-crossed 737 Max.…

16:20

Best Buy Opens Probe Into CEO's Personal Conduct [Slashdot]

The board of Best Buy is investigating allegations that CEO Corie Barry had an inappropriate romantic relationship with a fellow executive (Warning: source paywalled; alternative source), who has since left the electronics retailer. The Wall Street Journal reports: The allegations were sent to the board in an anonymous letter dated Dec. 7. The letter claims Ms. Barry had a romantic relationship for years with former Best Buy Senior Vice President Karl Sanft before she took over as CEO last June. "Best Buy takes allegations of misconduct very seriously," a spokesman told The Wall Street Journal. The Minneapolis company said its board has hired the law firm Sidley Austin LLP to conduct an independent review that is ongoing. "We encourage the letter's author to come forward and be part of that confidential process," the Best Buy spokesman said. "We will not comment further until the review is concluded." Ms. Barry didn't address the allegations and said she is cooperating with the probe. "The Board has my full cooperation and support as it undertakes this review, and I look forward to its resolution in the near term," she said in a statement.

Read more of this story at Slashdot.

16:16

Nextcloud Hub Announced For Offering On-Premises Content Collaboration Platform [Phoronix]

Nearly four years after forking from ownCloud, Nextcloud continues taking on the likes of Dropbox, Google Docs, and Microsoft 365 -- even more so now with the introduction of Nextcloud Hub, a completely integrated on-premises content collaboration platform...

16:03

DigitalOcean Is Laying Off Staff [Slashdot]

Cloud infrastructure provider DigitalOcean announced a round of layoffs, with potentially between 30 and 50 people affected. TechCrunch reports: DigitalOcean has confirmed the news with the following statement: "DigitalOcean recently announced a restructuring to better align its teams to its go-forward growth strategy. As part of this restructuring, some roles were, unfortunately, eliminated. DigitalOcean continues to be a high-growth business with $275M in [annual recurring revenues] and more than 500,000 customers globally. Under this new organizational structure, we are positioned to accelerate profitable growth by continuing to serve developers and entrepreneurs around the world." Before the confirmation was sent to us this morning, a number of footprints began to emerge last night, when the layoffs first hit, with people on Twitter talking about it, some announcing that they are looking for new opportunities and some offering help to those impacted. Inbound tips that we received estimate the cuts at between 30 and 50 people. With around 500 employees (an estimate on PitchBook), that would work out to up to 10% of staff affected.

Read more of this story at Slashdot.

15:45

Disney Drops 'Fox' Name, Will Rebrand As 20th Century Studios [Slashdot]

An anonymous reader quotes a report from Variety: In a move at once unsurprising and highly symbolic, the Walt Disney Company is dropping the "Fox" brand from the 21st Century Fox assets it acquired last March, Variety has learned. The 20th Century Fox film studio will become 20th Century Studios, and Fox Searchlight Pictures will become simply Searchlight Pictures. On the TV side, however, no final decisions have been made about adjusting the monikers of production units 20th Century Fox Television and Fox 21 Television Studios. Discussions about a possible name change are underway, but no consensus has emerged, according to a source close to the situation. Disney has already started the process to phase out the Fox name: Email addresses have changed for Searchlight staffers, with the fox.com address replaced with a searchlightpictures.com address. On the poster for Searchlight's next film "Downhill," with Julia Louis-Dreyfus and Will Ferrell, the credits begin with "Searchlight Pictures Presents." The film will be the first Searchlight release to debut with the new logo. "Call of the Wild," an upcoming family film, will be released under the 20th Century banner, sans Fox. Those logos won't be dramatically altered, just updated. The most notable change is that the word "Fox" has been removed from the logo marks. Otherwise, the signature elements -- swirling klieg lights, monolith, triumphal fanfare -- will remain the same.

Read more of this story at Slashdot.

15:29

Wine 5.0-RC6 Released With Another 21 Fixes [Phoronix]

We'll likely see the Wine 5.0 stable release next week or the following week, but for now Wine 5.0-RC6 is available as the newest weekly release candidate...

15:05

Xiaomi Spins Off POCO as an Independent Company [Slashdot]

Xiaomi said today it is spinning off POCO, a smartphone sub-brand it created in 2018, as a standalone company that will now run independently of the Chinese electronics giant and set its own market strategy. From a report: The move comes months after a top POCO executive -- Jai Mani, a former Googler -- and some other founding and core members left the sub-brand. The company today insisted that POCO F1, the only smartphone to be launched under the POCO brand, remains a "successful" handset. The POCO F1, a $300 smartphone, was launched in 50 markets. Xiaomi created the POCO brand to launch high-end, premium smartphones that would compete directly with flagship smartphones of OnePlus and Samsung. In an interview in 2018, Alvin Tse, the head of POCO, and Mani said that they were working on a number of smartphones and were also thinking about other gadget categories. At the time, the company had 300 people working on POCO, and they "shared resources" with the parent firm.

Read more of this story at Slashdot.

14:29

Europe mulls five year ban on facial recognition in public... with loopholes for security and research [The Register]

Euro Commission also wants to loosen purse strings for AI investment while tightening reins

The European Commission is weighing whether to ban facial recognition systems in public areas for up to five years, according to a draft report on artificial intelligence policy in the European Union.…

14:25

How Just Four Satellites Could Provide Worldwide Internet [Slashdot]

We've known since the 1980s that you don't need mega-constellations comprising thousands of satellites to provide global internet coverage. Continuous worldwide coverage is possible with a constellation of just four satellites placed at much higher altitudes. So why don't we have that? The big obstacle is cost. Several factors work to degrade a satellite's orbit, and to combat them, you need a huge amount of propellant on the satellite to consistently stabilize its orbit. Manufacturing, launch, and operational costs are just too high for the four-satellite trick. An anonymous reader writes: A new study proposes a counterintuitive approach that turns these degrading forces into ones that actually help keep these satellites in orbit. Instead of being elliptical, the satellites' orbits would be circular, letting them get by with less fuel while still providing nearly global coverage (at slower speeds). The team ran simulations and found two configurations that would work -- but there are still too many other issues for it to ever happen.
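
A rough back-of-the-envelope calculation shows why altitude is the lever here. A single satellite at altitude h can see a spherical cap with half-angle arccos(R/(R+h)), which works out to a fraction (1 - cos(theta))/2 of Earth's surface, so a few very high satellites can blanket the globe where low orbits need hundreds or thousands. The sketch below is illustrative only; it ignores minimum elevation angles, capacity, and latency.

```python
# Back-of-the-envelope coverage estimate (illustrative; not from the study).
import math

EARTH_RADIUS_KM = 6371.0

def coverage_fraction(altitude_km: float) -> float:
    """Fraction of Earth's surface visible from a satellite at this altitude."""
    cos_theta = EARTH_RADIUS_KM / (EARTH_RADIUS_KM + altitude_km)
    return (1.0 - cos_theta) / 2.0  # visible spherical cap area / total surface area

if __name__ == "__main__":
    for name, alt_km in [("Low Earth orbit (Starlink-like)", 550),
                         ("Medium Earth orbit (GPS-like)", 20200),
                         ("Geostationary orbit", 35786)]:
        frac = coverage_fraction(alt_km)
        print(f"{name:32s} {alt_km:>6} km  ~{frac:.1%} of the surface in view")
```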

Read more of this story at Slashdot.

13:45

Every Place is the Same Now [Slashdot]

With a phone, anywhere else is always just a tap away. From a column: Those old enough to remember video-rental stores will recall the crippling indecision that would overtake you while browsing their shelves. With so many options, any one seemed unappealing, or insufficient. In a group, different tastes or momentary preferences felt impossible to balance. Everything was there, so there was nothing to watch. Those days are over, but the shilly-shally of choosing a show or movie to watch has only gotten worse. First, cable offered hundreds of channels. Now, each streaming service requires viewers to manipulate distinct software on different devices, scanning through the interfaces on Hulu, on Netflix, on AppleTV+ to find something "worth watching." Blockbuster is dead, but the emotional dread of its aisles lives on in your bedroom. This same pattern has been repeated for countless activities, in work as much as leisure. Anywhere has become as good as anywhere else. The office is a suitable place for tapping out emails, but so is the bed, or the toilet. You can watch television in the den -- but also in the car, or at the coffee shop, turning those spaces into impromptu theaters. Grocery shopping can be done via an app while waiting for the kids' recital to start. Habits like these compress time, but they also transform space. Nowhere feels especially remarkable, and every place adopts the pleasures and burdens of every other. It's possible to do so much from home, so why leave at all?

Read more of this story at Slashdot.

13:06

NPD's Best-Selling Games of the Decade Charts 'Call of Duty' Domination [Slashdot]

The NPD group has rounded up sales stats for the last month, but with the flip from 2019 to 2020 it is also listing some of the best sellers over the last ten years. From a report: Grand Theft Auto V is the best-selling game across all platforms and outlets tracked from 2010 through the end of 2019, but otherwise the top ten is dominated by the Call of Duty series, with Red Dead Redemption at number 7 and Minecraft at number 10 as the only other titles. The top five:

1. Grand Theft Auto V
2. Call of Duty: Black Ops
3. Call of Duty: Black Ops II
4. Call of Duty: Modern Warfare 3
5. Call of Duty: Black Ops III

Read more of this story at Slashdot.

12:49

'Friendly' hackers are seemingly fixing the Citrix server hole – and leaving a nasty present behind [The Register]

Congratulations, you've won a secret backdoor

Hackers exploiting the high-profile Citrix CVE-2019-19781 flaw to compromise VPN gateways are now patching the servers to keep others out.…

12:26

A Hacker is Patching Citrix Servers To Maintain Exclusive Access [Slashdot]

Catalin Cimpanu, writing for ZDNet: Attacks on Citrix appliances have intensified this week, and multiple threat actors have now joined in and are launching attacks in the hopes of compromising a high-value target, such as a corporate network, government server, or public institution. In a report published today, FireEye says that among all the attack noise it's been keeping an eye on for the past week, it spotted one attacker that stuck out like a sore thumb. This particular threat actor was attacking Citrix servers from behind a Tor node, and deploying a new payload the FireEye team named NotRobin. FireEye says NotRobin had a dual purpose. First, it served as a backdoor into the breached Citrix appliance. Second, it worked similar to an antivirus by removing other malware found on the device and preventing other attackers from dropping new payloads on the vulnerable Citrix host. It is unclear if the NotRobin attacker is a good guy or a bad guy, as there was no additional malware deployed on the compromised Citrix systems beyond the NotRobin payload. However, FireEye experts are leaning toward the bad guy classification. In their report, they say they believe this actor may be "quietly collecting access to NetScaler devices for a subsequent campaign."

Read more of this story at Slashdot.

11:45

Teaching Assistants Say They've Won Millions From UC Berkeley [Slashdot]

The university underemployed more than 1,000 students -- primarily undergraduates in computer science and engineering -- in order to avoid paying union benefits, UAW Local 2865 says. From a report: The University of California at Berkeley owes student workers $5 million in back pay, a third-party arbitrator ruled on Monday, teaching assistants at the university say. More than 1,000 students -- primarily undergraduates in Berkeley's electrical engineering and computer science department -- are eligible for compensation, the United Auto Workers (UAW) Local 2865, which represents 19,000 student workers in the University of California system, told Motherboard. In some cases, individual students will receive around $7,500 per term, the union says. "This victory means that the university cannot get away with a transparent erosion of labor rights guaranteed under our contract," Nathan Kenshur, head steward of UAW Local 2865 and a third-year undergraduate math major at Berkeley, told Motherboard. Thanks to their union contract, students working 10 hours a week or more at Berkeley are entitled to a full waiver of their in-state tuition fees, $150 in campus fees each semester, and childcare benefits. (Graduate students also receive free healthcare.) But in recent years, Berkeley has avoided paying for these benefits, according to UAW Local 2865. Instead, the university has hired hundreds of students as teaching assistants with appointments of less than 10 hours a week. On Monday, an arbitrator agreed upon by the UAW and the university ruled that Berkeley had intentionally avoided paying its student employees' benefits by hiring part-time workers. It ordered the university to pay the full tuition amount for students who worked these appointments between fall 2017 and today, a press release from the union says.

Read more of this story at Slashdot.

11:05

Climate Models Are Getting Future Warming Projections Right [Slashdot]

Alan Buis of NASA's Jet Propulsion Laboratory, writes: There's an old saying that "the proof is in the pudding," meaning that you can only truly gauge the quality of something once it's been put to a test. Such is the case with climate models: mathematical computer simulations of the various factors that interact to affect Earth's climate, such as our atmosphere, ocean, ice, land surface and the Sun. For decades, people have legitimately wondered how well climate models perform in predicting future climate conditions. Based on solid physics and the best understanding of the Earth system available, they skillfully reproduce observed data. Nevertheless, they have a wide response to increasing carbon dioxide levels, and many uncertainties remain in the details. The hallmark of good science, however, is the ability to make testable predictions, and climate models have been making predictions since the 1970s. How reliable have they been? Now a new evaluation of global climate models used to project Earth's future global average surface temperatures over the past half-century answers that question: most of the models have been quite accurate. In a study accepted for publication in the journal Geophysical Research Letters, a research team led by Zeke Hausfather of the University of California, Berkeley, conducted a systematic evaluation of the performance of past climate models. The team compared 17 increasingly sophisticated model projections of global average temperature developed between 1970 and 2007, including some originally developed by NASA, with actual changes in global temperature observed through the end of 2017. The observational temperature data came from multiple sources, including NASA's Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP) time series, an estimate of global surface temperature change. The results: 10 of the model projections closely matched observations. Moreover, after accounting for differences between modeled and actual changes in atmospheric carbon dioxide and other factors that drive climate, the number increased to 14. The authors found no evidence that the climate models evaluated either systematically overestimated or underestimated warming over the period of their projections.
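
To make the kind of comparison involved concrete, here is an illustrative sketch with made-up numbers, not the study's data or code: fit a linear warming trend to a model projection and to an observational record over the same years, then compare the two. The actual study went further, also adjusting each projection for the gap between its assumed CO2 and other forcings and what really occurred.

```python
# Illustrative only: synthetic numbers standing in for a projection and observations.
import numpy as np

years = np.arange(1970, 2018)
rng = np.random.default_rng(42)

# Hypothetical temperature anomalies (deg C): ~0.18 C/decade projected,
# ~0.17 C/decade "observed", each with some year-to-year noise.
projected = 0.018 * (years - 1970) + rng.normal(0.0, 0.05, years.size)
observed = 0.017 * (years - 1970) + rng.normal(0.0, 0.08, years.size)

proj_trend = np.polyfit(years, projected, 1)[0] * 10  # deg C per decade
obs_trend = np.polyfit(years, observed, 1)[0] * 10

print(f"projected warming trend: {proj_trend:.3f} C/decade")
print(f"observed warming trend:  {obs_trend:.3f} C/decade")
print(f"projection/observation ratio: {proj_trend / obs_trend:.2f}")
```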

Read more of this story at Slashdot.

10:56

Fedora CoreOS Now Deemed Production Ready For Containerized Workload Experience [Phoronix]

Fedora CoreOS has graduated out of its preview state and is now considered ready for general use...

10:24

Toshiba Touts Algorithm That's Faster Than a Supercomputer [Slashdot]

It's a tantalizing prospect for traders whose success often hinges on microseconds: a desktop PC algorithm that crunches market data faster than today's most advanced supercomputers. Japan's Toshiba says it has the technology to make such rapid-fire calculations a reality -- not quite quantum computing, but perhaps the next best thing. From a report: The claim is being met with a mix of intrigue and skepticism at financial firms in Tokyo and around the world. Toshiba's "Simulated Bifurcation Algorithm" is designed to harness the principles behind quantum computers without requiring the use of such machines, which currently have limited applications and can cost millions of dollars to build and keep near absolute zero temperature. Toshiba says its technology, which may also have uses outside finance, runs on PCs made from off-the-shelf components. "You can just plug it into a server and run it at room temperature," Kosuke Tatsumura, a senior research scientist at Toshiba's Computer & Network Systems Laboratory, said in an interview. The Tokyo-based conglomerate, while best known for its consumer electronics and nuclear reactors, has long conducted research into advanced technologies. Toshiba has said it needs a partner to adopt the algorithm for real-world use, and financial firms have taken notice as they grapple for an edge in markets increasingly dominated by machines. Banks, brokerages and asset managers have all been experimenting with quantum computing, although viable applications are generally considered to be some time away.
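
Toshiba's machine targets the same class of problems quantum annealers chase: large combinatorial optimizations, typically cast as Ising models whose lowest-energy spin configuration encodes the answer. The sketch below is emphatically not the Simulated Bifurcation Algorithm; it is a plain simulated-annealing toy, included only to make that problem class concrete.

```python
# NOT Toshiba's Simulated Bifurcation Algorithm -- just a tiny simulated-annealing
# toy showing the kind of Ising-model optimization such machines target.
import math
import random

def ising_energy(spins, J):
    """E = -sum_{i<j} J[i][j] * s_i * s_j, where each spin is +1 or -1."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=20000, t_start=2.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    n = len(J)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    energy = ising_energy(spins, J)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = rng.randrange(n)
        spins[i] *= -1                       # propose flipping one spin
        new_energy = ising_energy(spins, J)
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / t):
            energy = new_energy              # accept the move
        else:
            spins[i] *= -1                   # reject: undo the flip
    return spins, energy

if __name__ == "__main__":
    rng = random.Random(1)
    n = 12
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            J[i][j] = rng.uniform(-1.0, 1.0)  # random couplings
    best_spins, best_energy = anneal(J)
    print("best spin assignment:", best_spins)
    print("best energy found:   ", round(best_energy, 3))
```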

Read more of this story at Slashdot.

10:19

Saturday Morning Breakfast Cereal - Self [Saturday Morning Breakfast Cereal]

Click here to go see the bonus panel!

Hovertext:
If you want to find out if someone truly thinks We Are All One, ask them about a politician they dislike.

09:46

Ryzen 9 3900X vs. Ryzen 9 3950X vs. Core i9 9900KS In Nearly 150 Benchmarks [Phoronix]

This week our AMD Ryzen 9 3950X review sample finally arrived and so we've begun putting it through the paces of many different benchmarks. The first of these Linux tests with the Ryzen 9 3950X is looking at the performance up against the Ryzen 9 3900X and Intel Core i9 9900KS in 149 different tests.

09:46

FBI: Nation-State Actors Have Breached Two US Municipalities [Slashdot]

Nation-state hackers breached the networks of two US municipalities last year, the FBI said in a security alert sent to private industry partners last week. An anonymous reader writes: The hacks took place after attackers used the CVE-2019-0604 vulnerability in Microsoft SharePoint servers to breach the two municipalities' networks. The FBI says that once attackers got a foothold on these networks, "malicious activities included exfiltration of user information, escalation of administrative privileges, and the dropping of webshells for remote/backdoor persistent access." "Due to the sophistication of the compromise and Tactics, Techniques, and Procedures (TTPs) utilized, the FBI believes unidentified nation-state actors are involved in the compromise," the agency said in its security alert. The FBI could not say if both intrusions were carried out by the same group. The agency also did not name the two hacked municipalities; however, it reported the two breaches in greater detail, listing the attackers' steps in each incident.

Read more of this story at Slashdot.

09:30

Who says HMRC hasn't got a sense of humour? Er, 65 million Brits [The Register]

I missed my Self Assessment filing deadline because.... a rundown of the worst excuses

Brits’ favourite government department, Her Majesty’s Revenues & Customs, has released a listicle of the most bizarre excuses people have given for missing the Self Assessment tax returns deadline, along with the weirdest biz expense claims…

09:06

Biden Wants To Get Rid of Law That Shields Companies like Facebook From Liability For What Their Users Post [Slashdot]

Democratic presidential candidate Joe Biden wants to get rid of the legal protection that has shielded social media companies including Facebook from liability for users' posts. From a report: The former vice president's stance, presented in an interview with The New York Times editorial board, is more extreme than that of other lawmakers who have confronted tech executives about the legal protection from Section 230 of the Communications Decency Act. "Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms," Biden said in the interview published Friday. The bill became law in the mid-1990s to help still-nascent tech firms avoid being bogged down in legal battles. But as tech companies have amassed more power and billions of dollars, many lawmakers across the political spectrum along with Attorney General William Barr, agree that some reforms of the law and its enforcement are likely warranted. But revoking the clause in its entirety would have major implications for tech platforms and may still fail to produce some of the desired outcomes. Section 230 allows for tech companies to take "good faith" measures to moderate content on their platforms, meaning they can take down content they consider violent, obscene or harassing without fear of legal retribution.

Read more of this story at Slashdot.

08:57

EU declares it'll Make USB-C Great Again™. You hear that, Apple? [The Register]

Bloc to make not-quite-universal connector universal within its bounds

The EU plans to force manufacturers to use a common connector – the happily symmetrical USB-C – for all mobes, fondleslabs, e-readers and similar electronic tat.…

08:25

EU Mulls Five-Year Ban on Facial Recognition Tech in Public Areas [Slashdot]

The European Union is considering banning facial recognition technology in public areas for up to five years, to give it time to work out how to prevent abuses. From a report: The plan by the EU's executive -- set out in an 18-page white paper -- comes amid a global debate about the systems driven by artificial intelligence and widely used by law enforcement agencies. The EU Commission said new tough rules may have to be introduced to bolster existing regulations protecting Europeans' privacy and data rights. "Building on these existing provisions, the future regulatory framework could go further and include a time-limited ban on the use of facial recognition technology in public spaces," the EU document said.

Read more of this story at Slashdot.

07:45

Aussie Firefighters Save World's Only Groves Of Prehistoric Wollemi Pines [Slashdot]

As wildfires tear through Australia, a specialized team of firefighters has managed to save hidden groves of the Wollemi pine -- a rare prehistoric tree that outlived the dinosaurs. The trees are so rare, they were thought to be extinct until 1994. From a report: It was a lifesaving mission as dramatic as any in the months-long battle against the wildfires that have torn through the Australian bush. But instead of a race to save humans or animals, a specialized team of Australian firefighters was bent on saving invaluable plant life: hidden groves of the Wollemi pine, a prehistoric tree species that has outlived the dinosaurs. Wollemia nobilis peaked in abundance 34 million to 65 million years ago, before a steady decline. Today, only 200 of the trees exist in their natural environment -- all within the canyons of Wollemi National Park, just 100 miles west of Sydney. The trees are so rare that they were thought to be extinct until 1994. That's the year David Noble, an officer with the New South Wales National Parks and Wildlife Service, rappelled into a narrow canyon and came across a grove of large trees he didn't recognize. Noble brought back a few twigs and showed them to biologists and botanists who were similarly stumped. A month later, Noble returned to the grove with scientists. It was then that they realized what they had found: "a tree outside any existing genus of the ancient Araucariaceae family of conifers," the American Scientist explains.

Read more of this story at Slashdot.

07:31

Stolen creds site WeLeakInfo busted by multinational cop op for data reselling [The Register]

One Irishman and one Dutchman both nicked

Two men have been arrested after Britain’s National Crime Agency and its international pals claimed the takedown of breached credentials-reselling website WeLeakInfo.…

07:07

YouTube's Algorithm is Pushing Climate Misinformation Videos, and Their Creators Are Profiting From It [Slashdot]

An anonymous reader shares a report: When an ad runs on a YouTube video, the video creator generally keeps 55 percent of the ad revenue, with YouTube getting the other 45 percent. This system's designed to compensate content creators for their work. But when those videos contain false information -- say, about climate change -- it's essentially encouraging the creation of more misinformation content. Meanwhile, the brands advertising on YouTube often have no idea where their ads are running. In a new report published today, the social-activism nonprofit Avaaz calculates the degree to which YouTube recommends videos with false information about climate change. After collecting more than 5,000 videos, Avaaz found that 16 percent of the top 100 related videos surfaced by the search term "global warming" contained misinformation. Results were a little better on searches for "climate change" (9 percent) and worse for the more explicitly misinfo-seeking "climate manipulation" (21 percent). Those videos with misinformation had more views and more likes than other videos returned for the same search terms -- by an average of 20 and 90 percent, depending on the search. Avaaz identified 108 different brands running ads on the videos with climate misinformation; ironically enough, about one in five of those ads was from "a green or ethical brand" like Greenpeace or World Wildlife Fund. Many of those and other brands told Avaaz that they were unaware that their ads were running on climate misinformation videos.

Read more of this story at Slashdot.

07:00

ASUS TUF Laptops With Ryzen Are Now Patched To Stop Overheating On Linux [Phoronix]

The AMD Ryzen Linux laptop experience continues to improve, albeit rather belatedly for some elements of the support. In addition to the AMD Sensor Fusion Hub driver finally being released and current/voltage reporting for Zen CPUs on Linux, another step forward in Ryzen mobile support is a fix for ASUS TUF laptops with these processors...

06:34

Time to burst out graphing: Get the Windows Insider experience... by taping a calculator to your monitor [The Register]

Microsoft releases a Windows 10 Fast Ring refresh and previews new calc toys

While 45 years of carbon emissions from Microsoft were being scrutinised by execs last night, the Windows Insider team made an emission of its own, in the form of a fresh Windows 10 build.…

06:00

Oracle Ties Previous All-Time Patch High With January 2020 Updates [Slashdot]

"Not sure if this is good news (Oracle is very busy patching their stuff) or bad news (Oracle is very busy patching their stuff) but this quarterly cycle they tied their all-time high number of vulnerability fixes released," writes Slashdot reader bobthesungeek76036. "And they are urging folks to not drag their feet in deploying these patches." Threatpost reports: The software giant patched 300+ bugs in its quarterly update. Oracle has patched 334 vulnerabilities across all of its product families in its January 2020 quarterly Critical Patch Update (CPU). Out of these, 43 are critical/severe flaws carrying CVSS scores of 9.1 and above. The CPU ties for Oracle's previous all-time high for number of patches issued, in July 2019, which overtook its previous record of 308 in July 2017. The company said in a pre-release announcement that some of the vulnerabilities affect multiple products. "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update patches as soon as possible," it added. "Some of these vulnerabilities were remotely exploitable, not requiring any login data; therefore posing an extremely high risk of exposure," said Boris Cipot, senior security engineer at Synopsys, speaking to Threatpost. "Additionally, there were database, system-level, Java and virtualization patches within the scope of this update. These are all critical elements within a company's infrastructure, and for this reason the update should be considered mandatory. At the same time, organizations need to take into account the impact that this update could have on their systems, scheduling downtime accordingly."

Read more of this story at Slashdot.

05:29

AMD Sends In A Bunch Of Fixes For Linux 5.6 Along With Pollock Support [Phoronix]

After already several rounds of feature work queued in DRM-Next for Linux 5.6, AMD has submitted a final batch of feature work for this next kernel as it concerns their AMDGPU graphics driver...

05:11

Autonomous Logistics Information System gets shoved off the F-35 gravy train in favour of ODIN [The Register]

Snafu-ridden maintenance software behemoth to be replaced

The US military is dumping its Autonomous Logistics Information System (ALIS) in favour of ODIN as it tries to break with the complex past of its ailing F-35 fighter jet maintenance IT suite.…

05:03

Intel Gen7 Graphics Mitigation Will Try To Avoid Performance Loss In Final Version [Phoronix]

Intel's open-source developers working on their security mitigation for the Gen7 graphics hardware have volleyed a new version of the patch series for that mitigation currently causing big hits to Ivybridge / Haswell performance...

04:39

Raspberry Pi 4 V3D Driver Reaches OpenGL ES 3.1 Conformance [Phoronix]

The V3D Gallium3D driver that most notably offers the open-source graphics support for the Raspberry Pi 4 is now an official OpenGL ES 3.1 implementation...

04:39

Microsoft picks a side, aims to make the business 'carbon-negative' by 2030 [The Register]

Plans to cancel out emissions from power consumption since 1975. No word on warming through excessive corporate hot air though

Microsoft has set itself the goal of being "carbon-negative" by 2030, nailing its colours to a so-called "moonshot" for worldwide removal and reduction of carbon.…

03:40

Fedora CoreOS out of preview [Fedora Magazine]

The Fedora CoreOS team is pleased to announce that Fedora CoreOS is now available for general use. Here are some more details about this exciting delivery.

Fedora CoreOS is a new Fedora Edition built specifically for running containerized workloads securely and at scale. It’s the successor to both Fedora Atomic Host and CoreOS Container Linux and is part of our effort to explore new ways of assembling and updating an OS. Fedora CoreOS combines the provisioning tools and automatic update model of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.  For more on the Fedora CoreOS philosophy, goals, and design, see the announcement of the preview release.

Some highlights of the current Fedora CoreOS release:

  • Automatic updates, with staged deployments and phased rollouts
  • Built from Fedora 31, featuring:
    • Linux 5.4
    • systemd 243
    • Ignition 2.1
  • OCI and Docker Container support via Podman 1.7 and Moby 18.09
  • cgroups v1 enabled by default for broader compatibility; cgroups v2 available via configuration

Fedora CoreOS is available on a variety of platforms:

  • Bare metal, QEMU, OpenStack, and VMware
  • Images available in all public AWS regions
  • Downloadable cloud images for Alibaba, AWS, Azure, and GCP
  • Can run live from RAM via ISO and PXE (netboot) images

Fedora CoreOS is under active development.  Planned future enhancements include:

  • Addition of the next release stream for extended testing of upcoming Fedora releases.
  • Support for additional cloud and virtualization platforms, and processor architectures other than x86_64.
  • Closer integration with Kubernetes distributions, including OKD.
  • Aggregate statistics collection.
  • Additional documentation.

Where do I get it?

To try out the new release, head over to the download page to get OS images or cloud image IDs.  Then use the quick start guide to get a machine running quickly.

How do I get involved?

It’s easy!  You can report bugs and missing features to the issue tracker. You can also discuss Fedora CoreOS in Fedora Discourse, the development mailing list, in #fedora-coreos on Freenode, or at our weekly IRC meetings.

Are there stability guarantees?

In general, the Fedora Project does not make any guarantees around stability.  While Fedora CoreOS strives for a high level of stability, this can be challenging to achieve in the rapidly evolving Linux and container ecosystems.  We’ve found that the incremental, exploratory, forward-looking development required for Fedora CoreOS — which is also a cornerstone of the Fedora Project as a whole — is difficult to reconcile with the iron-clad stability guarantee that ideally exists when automatically updating systems.

We’ll continue to do our best not to break existing systems over time, and to give users the tools to manage the impact of any regressions.  Nevertheless, automatic updates may produce regressions or breaking changes for some use cases. You should make your own decisions about where and how to run Fedora CoreOS based on your risk tolerance, operational needs, and experience with the OS.  We will continue to announce any major planned or unplanned breakage to the coreos-status mailing list, along with recommended mitigations.

How do I migrate from CoreOS Container Linux?

Container Linux machines cannot be migrated in place to Fedora CoreOS.  We recommend writing a new Fedora CoreOS Config to provision Fedora CoreOS machines.  Fedora CoreOS Configs are similar to Container Linux Configs, and must be passed through the Fedora CoreOS Config Transpiler to produce an Ignition config for provisioning a Fedora CoreOS machine.

Whether you’re currently provisioning your Container Linux machines using a Container Linux Config, handwritten Ignition config, or cloud-config, you’ll need to adjust your configs for differences between Container Linux and Fedora CoreOS.  For example, on Fedora CoreOS network configuration is performed with NetworkManager key files instead of systemd-networkd, and time synchronization is performed by chrony rather than systemd-timesyncd.  Initial migration documentation will be available soon and a skeleton list of differences between the two OSes is available in this issue.

CoreOS Container Linux will be maintained for a few more months, and then will be declared end-of-life.  We’ll announce the exact end-of-life date later this month.

How do I migrate from Fedora Atomic Host?

Fedora Atomic Host has already reached end-of-life, and you should migrate to Fedora CoreOS as soon as possible.  We do not recommend in-place migration of Atomic Host machines to Fedora CoreOS. Instead, we recommend writing a Fedora CoreOS Config and using it to provision new Fedora CoreOS machines.  As with CoreOS Container Linux, you’ll need to adjust your existing cloud-configs for differences between Fedora Atomic Host and Fedora CoreOS.

Welcome to Fedora CoreOS.  Deploy it, launch your apps, and let us know what you think!

03:00

Help! I'm trapped on Schrodinger's runaway train! Or am I..? [The Register]

There's an app for that, and it's utter pants

Something for the Weekend, Sir?  Sitting in my tin can far away from home, I marvel that I got here at all.…

03:00

MLB: Use Electronic Surveillance To Capture Fans' Data, Not Opponents' Signs [Slashdot]

theodp writes: Major League Baseball Regulations "prohibit the use of electronic equipment during games and state that no such equipment may be used for the purpose of stealing signs or conveying information designed to give a Club an advantage," reminded MLB Commissioner Rob Manfred Monday as harsh punishment was meted out for the Houston Astros sign stealing scandal. You can read the Commissioner's full statement at MLB.com, after you first carefully review the site's 5,680 word Privacy Policy which, ironically, attempts to describe the many ways that MLB will use electronic surveillance to collect and share information about you and your friends. MLB, a poster child for Google Marketing, boasted recently that the data it collects has enabled it to literally put a price on MLB fans' heads in the form of a Lifetime Value (LTV) metric. "Understanding our fans' budgets allows us to customize the offers and deals we present them," explained the MLB Technology blog. More details are available in Data Science and the Business of Major League Baseball (PDF, Strata Data Conference slides).

Read more of this story at Slashdot.

02:16

WebAssembly: Key to a high-performance web, or ideal for malware? Reg speaks to co-designer Andreas Rossberg [The Register]

State of Wasm: 'Better support for high-level languages', plus interesting cross-platform news

Interview  WebAssembly will not magically speed up your web application and may be as significant running in environments other than web browsers as it is within them, a co-designer of the language told The Register.…

01:34

Unlocking news: We decrypt those cryptic headlines about Scottish cops bypassing smartphone encryption [The Register]

New perspective on FBI, Interpol demands for backdoors

Vid  Police Scotland to roll out encryption bypass technology, as one publication reported this week, causing some Register readers to silently mouth: what the hell?…

01:04

The time that Sales braved the white hot heat of the data centre to save the day [The Register]

It's getting hot in here, so open all your doors

On Call  Welcome to On Call, The Register's regular foray into the increasingly unreliable memories of those who have to pick up the phone when everything is on fire.…

00:00

'Frankenstein' Material Can Self-Heal, Reproduce [Slashdot]

sciencehabit shares a report from Science Magazine: Researchers have now created a form of concrete that not only comes from living creatures but -- given the right inputs -- can turn one brick into two, two into four, and four into eight. [...] For this project, Wil Srubar, a materials scientist at the University of Colorado, Boulder, and his colleagues wanted to engineer life into a bulk structural material. To do so, they turned to a hearty photosynthetic cyanobacterial species in the genus Synechococcus. They mixed the cyanobacterium with sand and a hydrogel that helped retain water and nutrients. The mix provided structural support to the bacteria, which -- as they grew -- laid down calcium carbonate, similar to the way some ocean creatures create shells. When dried, the resulting material was as strong as cement-based mortar. "It looks like a Frankenstein-type material," Srubar says. "That's exactly what we're trying to create, something that stays alive." Under the right conditions, which included relatively high humidity, Srubar's living material not only survived but reproduced. After the researchers split the original brick in half and added extra sand, hydrogel, and nutrients, the cyanobacteria grew in 6 hours into two full-size bricks; after three generations (in which the researchers again split the bricks), they had eight bricks, they report today in Matter.

Read more of this story at Slashdot.

Thursday, 16 January

23:09

Benchmarks Of Arch Linux's Zen Kernel Flavor [Phoronix]

Following the recent Linux kernel tests of Liquorix and other scheduler discussions (and more), some requests rolled in from premium supporters to see the performance of Arch Linux's Zen kernel package against the generic kernel. Here are those benchmark results...

23:01

Nowhere to run to, nowhere to hide, muaha... Boffins build laser-eyed intelligent cam that sorta sees around corners [The Register]

Guess we can't escape our future Terminator overlords

Artificial intelligence with frickin' lasers beams attached can see objects hidden around corners, according to a study published in the journal Optica on Thursday.…

22:00

Rav1e Kicks Off 2020 With Speed Improvements For Rust-Based AV1 Encoding [Phoronix]

Xiph.org's Rustlang-written "Rav1e" AV1 video encoder is back on track with delivering weekly pre-releases after missing them over the past month due to the holidays. With Rav1e p20200115 come not only performance improvements but also binary size and build speed enhancements...

20:30

Some Hospitals Are Ditching Lead Aprons During X-Rays [Slashdot]

pgmrdlm shares a report from ABC News: Some hospitals are ditching the ritual of covering reproductive organs and fetuses during imaging exams after prominent medical and scientific groups have said it's a feel-good measure that can impair the quality of diagnostic tests and sometimes inadvertently increase a patient's radiation exposure. The about-face is intended to improve care, but it will require a major effort to reassure regulators, health care workers and the public that it's better not to shield. Lead shields are difficult to position accurately, so they often miss the target area they are supposed to protect. Even when in the right place, they can inadvertently obscure areas of the body a doctor needs to see -- the location of a swallowed object, say -- resulting in a need to repeat the imaging process, according to the American Association of Physicists in Medicine, which represents physicists who work in hospitals. Shields can also cause automatic exposure controls on an X-ray machine to increase radiation to all parts of the body being examined in an effort to "see through" the lead. Moreover, shielding doesn't protect against the greatest radiation effect: "scatter," which occurs when radiation ricochets inside the body, including under the shield, and eventually deposits its energy in tissues. "In April, the physicists' association recommended that shielding of patients be 'discontinued as routine practice,'" the report adds. "Its statement was endorsed by several groups, including the American College of Radiology and the Image Gently Alliance, which promotes safe pediatric imaging. However, experts continue to recommend that health care workers in the imaging area protect themselves with leaded barriers as a matter of occupational safety."

Read more of this story at Slashdot.

18:58

Copy-left behind: Permissive MIT, Apache open-source licenses on the up as developers snub GNU's GPL [The Register]

Share all our code modifications with others? Think again, hippie

Permissive open-source software licenses continue to gain popularity at the expense of copyleft licenses, according to a forthcoming report from WhiteSource, a biz that makes software licensing management tools.…

18:30

The Boring Company's Las Vegas Tunnel Is Nearly 50 Percent Complete [Slashdot]

According to the Las Vegas Convention and Visitors Authority, Elon Musk's Boring Company is now about 50% complete with its underground tunnel. It's about six football fields in length. Teslarati reports: The Boring Company officially started tunneling for the people mover after a ceremonial groundbreaking event on November 15. In just two months, the project is nearly halfway completed. The company's Tunnel Boring Machine (TBM) has been 40 feet underground for months and is working to drill two, one-mile-long tunnels. Boring Company began shipping portions of the TBM to the site in Las Vegas in September. Since then, the project has really started to take shape. The project cost the Boring Company $52.5 million and is expected to connect the Las Vegas Convention Center to popular Las Vegas hot spots. Downtown Las Vegas, the Strip, and McCarran International Airport will all be destination options for riders. The mover is expected to transport an estimated 4,400 people per hour. The Las Vegas project is just one of five projects the Boring Company has listed on its website. A Livestream of the Boring Company's Las Vegas Tunnel Project is available for viewing here.

Read more of this story at Slashdot.

18:05

Red Hat Recommends Disabling The Intel Linux Graphics Driver Over Hardware Flaw [Phoronix]

It's been another day of testing and investigating CVE-2019-14615, a.k.a. the Intel graphics hardware issue where Gen9 turned out to be okay but Gen7 graphics takes some big performance hits. Besides the Core i7 tests published yesterday in the aforelinked article, tests on relevant Core i3 and i5 CPUs are currently being carried out to see the impact there (so far, it's looking to be equally brutal)...

18:02

Google Cloud rolls out of bed, slips on suit, draws up premium support, vows to take it SLO to lure enterprises [The Register]

Meanwhile, AMD snags Intel exec as server chip boss

Google is hoping to improve the appeal of its mid-tier Cloud platform to enterprises with a new set of support and response options.…

17:50

Augmented Reality In a Contact Lens: It's the Real Deal [Slashdot]

Tekla Perry writes: Startup Mojo Vision announced a microdisplay mid-2019, with not a lot of talk about applications. Turns out, they had one very specific application in mind -- an AR contact lens. Last week the company let selected media have a look at working prototypes, powered wirelessly, though plans for the next version include a battery on board. The demos included edge detection and enhancement (intended for people with low vision) in a darkened room and text annotations. The lenses are entering clinical trials (company executives have been testing them for some time already). Steve Sinclair, senior vice president of product and marketing, says the first application will likely be for people with low vision -- providing real-time edge detection and dropping crisp lines around objects. Other applications include translating languages in real time, tagging faces, and providing emotional cues. "People can't tell you are wearing it, so we want the interaction to be subtle, done using just your eyes," Sinclair said. He also noted the experience is different from wearing glasses. "When you close your eyes, you still see the content displayed," he says. Mojo Vision is calling the technology Invisible Computing.

Read more of this story at Slashdot.

17:10

Dashcam Flaw Allows Anyone To Track Drivers In Real-Time Across the US [Slashdot]

An anonymous reader quotes a report from Motherboard: BlackVue is a dashcam company with its own social network. With a small, internet-connected dashcam installed inside their vehicle, BlackVue users can receive alerts when their camera detects an unusual event such as someone colliding with their parked car. Customers can also allow others to tune into their camera's feed, letting others "vicariously experience the excitement and pleasure of driving all over the world," a message displayed inside the app reads. Users are invited to upload footage of their BlackVue camera spotting people crashing into their cars or other mishaps with the #CaughtOnBlackVue hashtag. But what BlackVue's app doesn't make clear is that it is possible to pull and store users' GPS locations in real time over days or even weeks. Motherboard was able to track the movements of some of BlackVue's customers in the United States. Ordinarily, BlackVue lets anyone create an account and then view a map of cameras that are broadcasting their location and live feed. This broadcasting is not enabled by default, and users have to select the option to do so when setting up or configuring their own camera. Motherboard tuned into live feeds from users in Hong Kong, China, Russia, the U.K., Germany, and elsewhere. BlackVue spokesperson Jeremie Sinic told Motherboard in an email that the users on the map only represent a tiny fraction of BlackVue's overall customers. But the actual GPS data that drives the map is available and publicly accessible. By reverse engineering the iOS version of the BlackVue app, Motherboard was able to write scripts that pull the GPS location of BlackVue users over a week-long period and store the coordinates and other information like the user's unique identifier. One script could collect the location data of every BlackVue user who had mapping enabled on the eastern half of the United States every two minutes. Motherboard collected data on dozens of customers. Following the report, BlackVue said their developers "have updated the security measures" to prevent this sort of tracking. Motherboard confirmed that previously provided user data stopped working, and they said they have "deleted all of the data collected to preserve individuals' privacy."

Read more of this story at Slashdot.

16:54

Mir 1.7 Released With Improvements For Running X11 Software [Phoronix]

Mir 1.7 was released today as the newest feature release for this Ubuntu-focused display stack, which for the past two years has concentrated on providing viable Wayland support...

16:30

US States Tell Court Prices To Increase If Sprint, T-Mobile Allowed To Merge [Slashdot]

A group of U.S. states suing to block T-Mobile from merging with Sprint on Wednesday told a federal judge that the deal would violate antitrust laws and raise wireless prices for consumers. Reuters reports: The states filed a lawsuit in June to block the merger, saying it would harm low-income Americans in particular. T-Mobile and Sprint contend that the merger would enable the combined company to compete more effectively with dominant carriers Verizon and AT&T. U.S. District Court Judge Victor Marrero, who presided over a two-week trial last month in federal court in Manhattan, began hearing closing arguments in the case on Wednesday. "I'm here speaking on behalf of 130 million consumers who live in these states," Glenn Pomerantz, a lawyer for the states, said at the outset of his argument. "If this merger goes forward, they're at risk for paying billions of dollars more every single year for those services." When T-Mobile majority shareholder Deutsche Telekom first contemplated the deal in 2010, it "expressly and unambiguously admitted that it had potential to reduce price competition," Pomerantz said. The states also emphasized that the carriers did not need a merger to introduce previous generations of wireless technology, and Pomerantz argued that T-Mobile would continue to acquire spectrum, or airwaves that carry data, from a variety of sources even if the merger was blocked.

Read more of this story at Slashdot.

16:13

Bad news: Windows security cert SNAFU exploits are all over the web now. Also bad: Citrix gateway hole mitigations don't work for older kit [The Register]

Good news: There is none. Well, apart from you can at least fully patch the Microsoft blunder

Vid  Easy-to-use exploits have emerged online for two high-profile security vulnerabilities, namely the Windows certificate spoofing bug and the Citrix VPN gateway hole. If you haven't taken mitigation steps by now, you're about to have a bad time.…

15:50

Comcast Settles Lying Allegations, Will Issue Refunds and Cancel Debts [Slashdot]

An anonymous reader quotes a report from Ars Technica: Comcast has agreed to issue refunds to 15,600 customers and cancel the debts of another 16,000 people to settle allegations that the cable company lied to customers in order to hide the true cost of service. Comcast will have to pay $1.3 million in refunds. The settlement with Minnesota Attorney General Keith Ellison, announced yesterday, resolves a lawsuit filed by the state against Comcast in December 2018. The attorney general's lawsuit alleged that Comcast "charged Minnesota consumers more than it promised it would for their cable services, including undisclosed 'fees' that the company used to bolster its profits, and that it charged for services and equipment that customers did not request," the settlement announcement said. Comcast also "promised [customers] prepaid gift cards as an inducement to enter into multi-year contracts, then failed to provide the cards," Minnesota alleged. Refunds to the 15,600 customers will total $1.14 million. Comcast must also pay another $160,000 to the state attorney general's office, which can use any or all of that amount to provide additional refunds. That brings the total amount Comcast will pay to $1.3 million.

Read more of this story at Slashdot.

15:30

PinePhone Linux Smartphone Shipment Finally Begins [Slashdot]

Pine64 will finally start shipping the pre-order units of the PinePhone Braveheart Edition on January 17, 2020. Fossbytes reports: A year ago, the PinePhone was made available only to developers and hackers. After positive responses and suggestions, the Pine64 developers planned to bring the PinePhone to everyone. In November last year, pre-orders for the PinePhone Braveheart Edition opened to the public. But manufacturing issues got in the way, and the shipment, originally scheduled for December last year, slipped by several weeks. The PinePhone Braveheart Edition is an affordable, open-source Linux smartphone that ships with a factory test image based on postmarketOS on its internal storage. You can check the PinePhone Wiki for compatible operating systems such as Ubuntu Touch, postmarketOS, or Sailfish OS, which you can boot either from internal storage or an SD card.

Read more of this story at Slashdot.

15:10

Google Stadia Promises More Than 120 Games in 2020, Including 10 Exclusives [Slashdot]

Google said today that it's on track to bring more than 120 games to its cloud gaming service Stadia in 2020 and is planning to offer more than 10 Stadia-exclusive games for the first half of the year. From a report: That would be a pretty massive jump from the 26 games and one exclusive that are currently available, and all in a little more than a year after the service's launch, if those projections hold true. Previously, Google had only explicitly confirmed four games for 2020, so this news was much needed to let early adopters know there are a lot more games on the way. Google also announced other updates rolling out to Stadia over the next three months, including 4K gaming on the web, support for more Android phones (it's currently only available on Google's Pixels), wireless gameplay on the web through the Stadia controller (you currently have to plug in a cable), and "further [Google] Assistant functionality" when playing Stadia through a browser. We're asking Google for more details -- and we're particularly curious whether any of the new exclusive games are the kind that are only possible with the power of the cloud. The company said in October that it's building out a few first-party studios to eventually make that a reality.

Read more of this story at Slashdot.

14:30

Scientists Sent Mighty Mice To Space To Improve Treatments Back On Earth [Slashdot]

In December, scientists sent 40 very muscular mice to live temporarily at the International Space Station. The resulting research, they hope, could lead to new treatments for kids with muscular dystrophy, or cancer patients with muscle wasting. From a report: In early December at the Kennedy Space Center in Florida, two anxious scientists were about to send 20 years of research into orbit. "I feel like our heart and soul is going up in that thing," Dr. Emily Germain-Lee told her husband, Dr. Se-Jin Lee, as they waited arm-in-arm for a SpaceX rocket to launch. A few seconds later the spacecraft took off, transporting some very unusual mice to the International Space Station, where they would spend more than a month in near zero gravity. Ordinarily, that would cause the animals' bones to weaken and their muscles to atrophy. But Lee and Germain-Lee, a power couple in the research world, were hoping that wouldn't happen with these mice. "It was worth waiting 20 years for," Lee said as the Falcon 9 rocket headed toward space. "And someday it may really help people," Germain-Lee added. The couple hope that what they learn from these mice will lead to new treatments for millions of people with conditions that weaken muscles and bones. Among those who might eventually benefit: children with muscular dystrophy or brittle bone disease, cancer patients with muscle wasting, bedridden patients recovering from hip fractures, older people whose bones and muscles have become dangerously weak, and astronauts on long space voyages.

Read more of this story at Slashdot.

14:10

JRR Tolkien's Son Christopher Dies Aged 95 [Slashdot]

Christopher Tolkien, the son of Lord Of The Rings author JRR Tolkien who was responsible for editing and publishing much of his father's work, has died aged 95. The Tolkien Society released a short statement on Twitter to confirm the news. The Guardian reports: Tolkien, who was born in Leeds in 1924, was the third and youngest son of the revered fantasy author and his wife Edith. He grew up listening to his father's tales of Bilbo Baggins, which later became the children's fantasy novel, The Hobbit. He drew many of the original maps detailing the world of Middle Earth for his father's The Lord of the Rings when the series was first published between 1954 and 1955. He also edited much of his father's posthumously published work following his death in 1973. Since 1975 he had lived in France with his wife, Baillie. In an interview with the Guardian in 2012, Christopher's son Simon described the enormity of the task after his grandfather died with so much material still unpublished. Simon said: "He had produced this huge output that covered everything from the history of the gods to the history of the people he called the Silmarils -- that was his great work but it had never seen the light of day despite his best efforts to get it published." Christopher was left to sift through the files and notebooks, and over the two decades after his father's death he published The Silmarillion, Unfinished Tales, Beren And Luthien and The History of Middle-earth, which fleshed out the complex world of elves and dwarves created by his father.

Read more of this story at Slashdot.

13:50

FBI Changes Policy for Notifying States of Election Systems Cyber Breaches [Slashdot]

The Federal Bureau of Investigation will notify state officials when local election systems are believed to have been breached by hackers [the link may be paywalled], a pivot in policy that comes after criticism that the FBI wasn't doing enough to inform states of election threats, WSJ reported Thursday, citing people familiar with the matter. From a report: The FBI's previous policy stated that it notified the direct victims of cyberattacks, such as the counties that own and operate election equipment, but wouldn't necessarily share that information with states. Several states and members of Congress in both parties had criticized that policy as inadequate and one that stifled state-local partnerships on improving election security. Further reading: Despite Election Security Fears, Iowa Caucuses Will Use New Smartphone App.

Read more of this story at Slashdot.

13:23

Remember when Netscout got so upset at 'challenger' label in Gartner Magic Quadrant, it sued? Well, top court just ended all those shenanigans [The Register]

Connecticut Supremes affirm trial judge's decision to toss 'pay to play' claim

Gartner did not defame network app biz Netscout by placing it in the "challenger" section of its Magic Quadrant instead of the "leaders" section, the Supreme Court of the US state of Connecticut has ruled.…

13:10

Microsoft Pledges To Be Carbon Negative By 2030 and Re-capture All of Its Past Emissions [Slashdot]

Microsoft has announced an aggressive plan to rectify its role in the climate crisis. From a report: In a blog post published on Thursday, the company pledged to "reduce and ultimately remove" its carbon footprint. To do that, Microsoft says its operations will be carbon negative by 2030 -- and, it will spend the subsequent two decades sequestering the equivalent of its entire history of carbon dioxide emissions, going back to 1975. Microsoft has already been carbon neutral for several years now, largely by investing in efficient energy practices. It isn't the only company to take these steps; Apple has boasted for some time now about being run on 100 percent renewable energy across the globe and Google says it's been carbon neutral for over a decade. But Microsoft's latest initiative takes all that a leap further. Moving forward, the company says it will be carbon negative, meaning that in addition to prioritizing energy efficiency in its own operations, it will actively work to reduce more atmospheric carbon than it emits. Microsoft is hoping to hit this mark by 2030.

Read more of this story at Slashdot.

12:30

Google Will Wind Down Chrome Apps Starting in June [Slashdot]

Google said this week that it will begin to phase out traditional Chrome apps starting in June, winding them down slowly over two years' time. Chrome extensions, though, will live on. From a report: Google said Tuesday in a blog post that it would stop accepting new Chrome apps in March. Existing apps could continue to be developed through June 2022. The important dates start in June of this year, when Google will end support for Chrome Apps on the Windows, Mac, and Linux platforms. Education and Enterprise customers on these platforms will get a little more time to get their affairs in order, until December 2020. Google had actually said four years ago that it would phase out Chrome apps on Windows, Mac, and Linux in 2018. The company appears to have waited longer than announced before beginning this process. The other platform that's affected by this, of course, is Google's own Chrome OS and Chromebooks, for which the apps were originally developed.

Read more of this story at Slashdot.

11:39

Ryzen CPUs On Linux Finally See CCD Temperatures, Current + Voltage Reporting [Phoronix]

One of the few frustrations with AMD Ryzen CPU support on Linux to date, besides the often delayed support for CPU temperature reporting, has been the mainline kernel not supporting voltage readings and other extra sensors. But that is finally changing with the "k10temp" driver being extended to include current and voltage reporting plus CCD temperature reporting on Zen 2 processors...

11:05

Retooled CentOS Build Scripts To Help Spin New Releases Quicker, More Automation [Phoronix]

The release of CentOS 8 came several months after RHEL 8.0, and this week's CentOS release rebased against RHEL 8.1 took over two months of work. But moving forward to RHEL 8.2 and beyond, that turnaround time will hopefully be less...

09:50

Azure consultant's Google image search results hotlinking sueball booted off the pitch by High Court [The Register]

British copyright law probably wasn't right way to do this one

An Azure consultant has lost his bid to sue Google for copyright infringement over search results that sent web users to a website run by a hotlinker who was displaying one of his photos.…

09:13

Announcing the Cloudflare Access App Launch [The Cloudflare Blog]


Every person joining your team has the same question on Day One: how do I find and connect to the applications I need to do my job?

Since launch, Cloudflare Access has helped improve how users connect to those applications. When you protect an application with Access, users never have to connect to a private network and never have to deal with a clunky VPN client. Instead, they reach on-premise apps as if they were SaaS tools. Behind the scenes, Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Administrators need about an hour to deploy Access. End user logins take about 20 ms, and that response time is consistent globally. Unlike VPN appliances, Access runs in every data center in Cloudflare’s network in 200 cities around the world. When Access works well, it should be easy for administrators and invisible to the end user.

However, users still need to locate the applications behind Access, and for internally managed applications, traditional dashboards require constant upkeep. As organizations grow, that roster of links keeps expanding. Department leads and IT administrators can create and publish manual lists, but those become a chore to maintain. Teams need to publish custom versions for contractors or partners that only make certain tools visible.

Starting today, teams can use Cloudflare Access to solve that challenge. We’re excited to announce the first feature in Access built specifically for end users: the Access App Launch portal.

The Access App Launch is a dashboard for all the applications protected by Access. Once enabled, end users can log in and connect to every app behind Access with a single click.

How does it work?

When administrators secure an application with Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

To check identity, Access relies on the identity provider that the team already uses. Access integrates with providers like OneLogin, Okta, AzureAD, G Suite and others to determine who a user is. If the user has not logged in yet, Access will prompt them to do so at the identity provider configured.


When the user logs in, they are redirected through a subdomain unique to each Access account. Access assigns that subdomain based on a hostname already active in the account. For example, an account with the hostname “widgetcorp.tech” will be assigned “widgetcorp.cloudflareaccess.com”.


The Access App Launch uses the unique subdomain assigned to each Access account. Now, when users visit that URL directly, Cloudflare Access checks their identity and displays only the applications that the user has permission to reach. When a user clicks on an application, they are redirected to the application behind it. Since they are already authenticated, they do not need to log in again.

In the background, the Access App Launch decodes and validates the token stored in the cookie on the account’s subdomain.
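For a service that sits behind Access and wants to perform a similar check itself, Cloudflare exposes the public signing keys for each account on that same subdomain. As a minimal sketch only, reusing the widgetcorp example subdomain from above and assuming Cloudflare's documented /cdn-cgi/access/certs key endpoint (not spelled out in this post), the keys can be fetched like so:

# fetch the JSON Web Key Set that Access uses to sign its tokens
$ curl -s https://widgetcorp.cloudflareaccess.com/cdn-cgi/access/certs

An application could then verify the signature and audience of the token it finds in the Access cookie against those keys before trusting a request, which is broadly the same check the App Launch performs on the user's behalf.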

How is it configured?

The Access App Launch can be configured in the Cloudflare dashboard in three steps. First, navigate to the Access tab in the dashboard. Next, enable the feature in the “App Launch Portal” card. Finally, define who should be able to use the Access App Launch in the modal that appears and click “Save”. Permissions to use the Access App Launch portal do not impact existing Access policies for who can reach protected applications.


Administrators do not need to manually configure each application that appears in the portal. Access App Launch uses the policies already created in the account to generate a page unique to each individual user, automatically.

Defense-in-depth against phishing attacks

Phishing attacks attempt to trick users by masquerading as a legitimate website. In the case of business users, team members think they are visiting an authentic application. Instead, an attacker can present a spoofed version of the application at a URL that looks like the real thing.

Take “example.com” vs “examрle.com” - they look identical, but one uses the Cyrillic “р” and becomes an entirely different hostname. If an attacker can lure a user to visit “examрle.com”, and make the site look like the real thing, that user could accidentally leak credentials or information.
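One quick way to convince yourself that the two strings really are different hostnames is to dump their bytes: the ASCII "p" is the single byte 70, while the Cyrillic "р" is encoded in UTF-8 as the two-byte sequence d1 80. A throwaway check from any shell (od is used here purely for illustration):

# compare the raw bytes of the genuine and the lookalike hostname
$ printf 'example.com' | od -An -tx1
$ printf 'examрle.com' | od -An -tx1

The second command prints d1 80 where the first prints 70, even though the two strings are indistinguishable on screen.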


To be successful, the attacker needs to get the victim to visit that fraudulent URL. That frequently happens via email from untrusted senders.

The Access App Launch can help prevent these attacks from targeting internal tools. Teams can instruct users to only navigate to internal applications through the Access App Launch dashboard. When users select a tile in the page, Access will send users to that application using the organization’s SSO.

Cloudflare Gateway can take it one step further. Gateway's DNS resolver filtering can help defend against phishing attacks that use sites resembling legitimate applications that do not sit behind Access. To learn more about adding Gateway, in conjunction with Access, sign up to join the beta here.

What’s next?

As part of last week’s announcement of Cloudflare for Teams, the Access App Launch is now available to all Access customers today. You can get started with instructions here.

Interested in learning more about Cloudflare for Teams? Read more about the announcement and features here.

09:10

The Curse of macOS Catalina strikes again as AccountEdge stays 32-bit [The Register]

Apple: 'The apps you use every day.' Except that one. And that one. And those are right out

The macOS Catalina bad news train kept on rolling this week as AccountEdge, friend of the Apple-using beancounters, threw in the towel over the forced migration of Macs to a 64-bit world.…

08:35

The $4.3bn trial of the century is over! Now we wait for judgment [The Register]

'A very long haul' says judge as HPE v Lynch and Hussain reaches its end

Autonomy Trial  After 93 days in the courtroom, the $5bn Autonomy Trial has reached its end, with Mike Lynch's lawyers urging the judge to dismiss all of HPE's claims against the British software firm's former CEO.…

08:20

KDE Plasma 5.18 LTS Reaches Beta With Much Better GTK App Integration [Phoronix]

Out this morning is the first beta of KDE Plasma 5.18, which is also the project's first long-term support (LTS) release since Plasma 5.12...

08:10

Load of Big Green for Microsoft: Lloyds Banking Group inks company-wide Managed Desktop deal [The Register]

Bankers ring in 2020 by thwacking employees with the Windows stick

Microsoft and UK finance behemoth Lloyds Banking Group have signed a deal that will see the Windows giant manage the group's desktops and mobile devices.…

07:56

GNU Guile 3.0 Released With JIT Code Generation For Up To 4x Better Performance [Phoronix]

GNU Guile 3.0 has been released, GNU's implementation of the Scheme programming language with various extra features. The big news with Guile 3.0 is better performance...

07:06

LLVM 10 Adds Option To Help Offset Intel JCC Microcode Performance Impact [Phoronix]

Disclosed back in November was the Intel Jump Conditional Code Erratum that necessitated updated CPU microcode to mitigate, and with that came a nearly across-the-board performance impact. But Intel developers had been working on assembler patches to help reduce that performance hit. The GNU Assembler patches were merged back in December, while now ahead of LLVM 10.0 that alternative toolchain has an option for helping to recover some of the lost performance...

07:00

Get hands on with Kubernetes, service meshes – and more – at our fantastic Continuous Lifecycle London conference [The Register]

Dive into an all-day workshop – now at early-bird prices – for practical advice from experts in the field

Event  If you want to get deep into continuous delivery, or get your hands dirty with Kubernetes or Lambda, our Continuous Lifecycle London conference has a workshop for you.…

06:30

Linux 5.6 Crypto Getting AVX/AVX2/AVX-512 Optimized Poly1305 - Helps WireGuard [Phoronix]

Now that WireGuard lead developer Jason Donenfeld has managed to get this secure VPN tunnel technology queued for introduction in Linux 5.6 mainline, he's begun optimizing other areas of the kernel for optimal WireGuard performance...

06:21

This is also a system for GPs, right? UK doctors seek clarity over Health dept's £40m single sign-on funding [The Register]

Docs keen to hear how, as promised, project will make their own logins less of a Hancockup

UK doctors' union the British Medical Association (BMA) is seeking clarification on how GPs will access the £40m funding for single sign-on to health systems recently promised by health and social care secretary Matt Hancock.…

05:26

Shhhhhh: Fujitsu bags another £12m from Libraries NI as bosses fail to bookmark replacement [The Register]

Software licensing issues made it harder to turn the page

Libraries Northern Ireland - the public sector organ which, erm, runs libraries in Northern Ireland - has renewed an IT services contract with Fujitsu worth £12m after running out of time to run a tender process.…

04:45

Zhaoxin 7-Series x86 CPUs Mitigated For Spectre V2 + SWAPGS [Phoronix]

When it comes to the Zhaoxin x86-compatible processors coming out of VIA's joint venture in Shanghai, their forthcoming 7-series (KX-7000) has hardware mitigations in place for some CPU vulnerabilities...

04:45

Google reveals new schedule for 'phasing out support for Chrome Apps across all operating systems' [The Register]

June 2020 is the end for users on Windows, Linux and Mac

Google has rolled out a new schedule for ending support for Chrome Apps – packaged desktop applications built with HTML, CSS and JavaScript – in favour of Progressive Web Apps (PWAs) and other browser-based approaches such as Chrome Extensions.…

04:27

Western Digital's Zonefs File-System Looks Like It Could Be Ready To Land With Linux 5.6 [Phoronix]

Introduced last month was Zonefs as a new Linux file-system developed by Western Digital. It's looking like that new file-system could be ready for introduction with the upcoming Linux 5.6 cycle...

03:57

The dream of a single European patent may die next month – and everyone is in denial about it [The Register]

German Constitutional Court is much more dangerous than people think

It has been years in the making and Europe’s largest law firms are smacking their lips in anticipation but the long-held dream of a single European patent system may die next month – and everyone appears to be in denial.…

03:00

Spanking the pirates of corporate security? Try a Plimsoll [The Register]

Execs don't care to keep things shipshape if they don't see a return.... so let's MAKE them

Column  On New Year's Eve 2019, the good ship Travelex struck the iceberg of ransomware. That's not a good metaphor, to be honest: when the SS Titanic hit its frozen nemesis, it had the good taste to unambiguously sink in two hours and 40 minutes. Not so Travelex.…

02:15

Peek inside this fascinating effort to map Britain's subterranean tubes – a deep sprawl of unrecorded infrastructure beneath our feet [The Register]

Join us next month in a cosy pub to hear all about Ordnance Survey's latest project

Register Lecture  A golden age of cartography is upon us. Only this time, it's satellites and tech firms’ vehicles that are crossing the Earth’s surface, compiling maps for their distant masters who are building geospatial services.…

01:00

Attention security startup founders: Give your fledgling Brit biz a boost with Tech Nation’s free Cyber 2.0 school [The Register]

Sign up now: The UK government's scheme to help new companies grow and scale is back

Promo  If you need your new security company to get noticed, Tech Nation’s Cyber programme is back, opening its doors for another cohort of infosec companies looking to scale at speed.…

Wednesday, 15 January

23:54

The mysterious giant blobs of gas around our galaxy's black hole are actually massive merger stars being shredded [The Register]

Yum, long noodle-like stars

Astronomers have finally figured out what the peculiar object known as “G2” orbiting the supermassive black hole at the center of the Milky Way is: a behemoth star created from the merger of two binary stars being stretched by the extreme tidal forces around the black hole.…

23:03

Top Euro court advised: Cops, spies yelling 'national security' isn’t enough to force ISPs to hand over massive piles of people's private data [The Register]

Opinion is preliminary, though a good start

Analysis  In a massive win for privacy rights, the advocate general advising the European Court of Justice (ECJ) has said that national security concerns should not override citizens’ data privacy. Thus, ISPs should not be forced to hand over personal information without clear justification.…

22:28

LLVM Developers Discuss Improved Decision Making Process [Phoronix]

LLVM project founder Chris Lattner has proposed a new decision making process for the LLVM compiler stack around new sub-project proposals, new social policies, changes to core infrastructure, and other key changes...

20:14

Facial-recognition algos vary wildly, US Congress told, as politicians try to come up with new laws on advanced tech [The Register]

Most-accurate algorithms showed 'little to no bias', so nothing to fear, eh?

Vid  A recent US government report investigating the accuracy of facial recognition systems across different demographic groups has sparked fresh questions on how the technology should be regulated.…

19:00

Intel's Mitigation For CVE-2019-14615 Graphics Vulnerability Obliterates Gen7 iGPU Performance [Phoronix]

Yesterday we noted that the Linux kernel picked up a patch mitigating an Intel Gen9 graphics vulnerability. It didn't sound too bad at first, but then seeing that Ivy Bridge Gen7 and Haswell Gen7.5 graphics are also affected raised eyebrows, especially with that requiring a much larger mitigation. Now in testing the performance impact, the current mitigation patches completely wreck Ivy Bridge/Haswell graphics performance.

18:54

China tells America, with a straight face, it will absolutely crack down on hacking and copyright, tech blueprint theft [The Register]

Wow, it's all coming up Trump right now, huh?

America and China have struck a deal that may signal the beginning of the end in their ongoing trade war.…

18:02

No Mo'zilla for about 100 techies today: Firefox maker lays off staff as boss talks of 'difficult choices' and funding [The Register]

Enjoy that new version 72? Donate.mozilla.org is a thing, folks

On Wednesday Mozilla Corporation, maker of the Firefox browser and would-be internet privacy protector, said it plans to lay off an undisclosed number of employees.…

17:00

Automated IDOR Discovery through Stateful Swagger Fuzzing [Yelp Engineering and Product Blog]

Scaling security coverage in a growing company is hard. The only way to do this effectively is to empower front-line developers to be able to easily discover, triage, and fix vulnerabilities before they make it to production servers. Today, we’re excited to announce that we’ll be open-sourcing fuzz-lightyear: a testing framework we’ve developed to identify Insecure Direct Object Reference (IDOR) vulnerabilities through stateful Swagger fuzzing, tailored to support an enterprise, microservice architecture. This integrates with our Continuous Integration (CI) pipeline to provide consistent, automatic test coverage as web applications evolve. The Problem As a class of vulnerabilities, IDOR is arguably...

16:54

What do Brit biz consultants and X-rated cam stars have in common? Wide open... AWS S3 buckets on public internet [The Register]

Exposed: Intimate... personal details belonging to thousands of folks

A pair of misconfigured cloud-hosted file silos have left thousands of peoples' sensitive info sitting on the open internet.…

14:33

Yo, sysadmins! Thought Patch Tuesday was big? Oracle says 'hold my Java' with huge 334 security flaw fix bundle [The Register]

House of Larry delivers massive update for 93 products

Oracle has released a sweeping set of security patches across the breadth of its software line.…

13:41

Microsoft's on Edge and you could be, too: Chromium-based browser exits beta – with teething problems [The Register]

Redmond loves Linux so much this Internet-Explorer-replacement is for Windows, macOS only right now

Microsoft's Edge browser, retooled to run on Chromium's open source foundation, has shed its beta designation and entered general release on Wednesday, promising performance, productivity, privacy, and value – a word which here means Microsoft Rewards gift card points for using Bing and access to so-called Premium News.…

13:20

Kubuntu Focus Offers The Most Polished KDE Laptop Experience We've Seen Yet [Phoronix]

As we mentioned back in December, a Kubuntu-powered laptop is launching with the blessing of Canonical and the Kubuntu Community Council. That laptop, the Kubuntu Focus, will begin shipping at the beginning of February, while pre-orders opened today along with the embargo lift. We've been testing out the Kubuntu Focus over the last several weeks and it's quite a polished KDE laptop for those wanting to enjoy KDE Plasma on a portable machine without having to tweak the system for optimal efficiency or work around other constraints.

12:00

Look sharp: Microsoft Blazor's gone mobile. Fancy developing mobile apps with C# web technology? [The Register]

Going like Blazor(s) everywhere, says Microsoft, but will this enthusiasm last?

Microsoft will provide experimental support for native mobile applications using its Blazor web development platform.…

11:00

Huawei invites app developers to board the HMS Core to grab their pieces of eight [The Register]

£20k 'incentive' up for grabs if you can get something into the App Gallery before the end of Jan

The embattled Chinese networking gear and mobe slinger used its London Developer Conference on Wednesday to lure coders to its HMS (Huawei Mobile Services) platform as a post-Google world beckoned.…

10:07

The eyeopening multi-billion-dollar merry-go-round of Insight Partners, Veeam – and their one-time beau N2WS [The Register]

Plenty of cash flying around ahead of that $5bn biz gobble

Updated  What an interesting world of revolving doors the enterprise storage sector can be sometimes.…

09:52

CentOS-8 1911 Released As Rebuild Off Red Hat Enterprise Linux 8.1 [Phoronix]

CentOS 8 1911 has been released today as the community rebuild rebased to Red Hat Enterprise Linux 8.1 that debuted back in November...

08:23

A fine host for a Raspberry Pi: The Register rakes a talon over the NexDock 2 [The Register]

No Continuum this time, now it's all about the Android

Review  Late, lightweight and looking like a Macbook, the new NexDock has finally arrived. But with the world agog over foldables, is it any good?…

07:56

RenderDoc 1.6 Released, NVIDIA + AMD + Intel All Primed For Vulkan 1.2 [Phoronix]

This morning's release of Vulkan 1.2 is off to a great start...

07:20

It's just semantics: Bulgarian software dev Ontotext squeezes out GraphDB 9.1 [The Register]

Now sing with us: Validation, governance and security

Ontotext, the Bulgarian software developer focused on organisational semantic knowledge, has rolled out an update to its graph database, GraphDB 9.1.…

07:00

Vulkan 1.2 Arrives With An Eye On Greater Performance, Better Compatibility With Other 3D APIs On Top [Phoronix]

Next month will already mark four years since the release of Vulkan 1.0, but today brings an early surprise... Vulkan 1.2! The Khronos Group has prepared Vulkan 1.2 for release as the newest major update to this graphics and compute API. Several vendors also have Vulkan 1.2 support in tow.

06:40

Ex-Autonomy CFO Sushovan Hussain's part in the accounting badness was 'wildly overblown' [The Register]

One-time chief finance suit's legal defence sums up at end of marathon $5bn trial

Autonomy Trial  Key witnesses in the Autonomy Trial testified against Mike Lynch and Sushovan Hussain to save their own skins from US prosecutors, Hussain's barrister told London's High Court.…

06:10

Totally Subcontracted Business: TSB to outsource entire IT estate to IBM for a cool $1bn after 2019 meltdown [The Register]

Big Blue to build and run private cloud

TSB parent, Spain's Banco Sabadell, has signed a €1bn group deal with IBM to build and run its entire banking infrastructure via a private cloud among a raft of other services – the outage-hit UK arm has told The Register.…

05:33

Boeing aircraft sales slump to historic lows after 737 Max annus horribilis [The Register]

This is what happens when you scrimp on software dev, testing and docs

Boeing's deliveries of new airliners have slumped to a reported 11 year low following the 737 Max software flaw which caused two fatal crashes.…

05:30

Introducing Cloudflare for Campaigns [The Cloudflare Blog]


During the past year, we saw nearly 2 billion global citizens go to the polls to vote in democratic elections. There were major elections in more than 50 countries, including India, Nigeria, and the United Kingdom, as well as elections for the European Parliament. In 2020, we will see a similar number of elections in countries from Peru to Myanmar. In November, U.S. citizens will cast their votes for the 46th President, 435 seats in the U.S. House of Representatives, 35 of the 100 seats in the U.S. Senate, and many state and local elections.

Recognizing the importance of maintaining public access to election information, Cloudflare launched the Athenian Project in 2017, providing U.S. state and local government entities with the tools needed to secure their election websites for free. As we’ve seen, however, political parties and candidates for office all over the world are also frequent targets for cyberattack. Cybersecurity needs for campaign websites and internal tools are at an all time high.

Although Cloudflare has helped improve the security and performance of political parties and candidates for office all over the world for years, we’ve long felt that we could do more. So today, we’re announcing Cloudflare for Campaigns, a suite of Cloudflare services tailored to campaign needs. Cloudflare for Campaigns is designed to make it easier for all political campaigns and parties, especially those with small teams and limited resources, to get access to cybersecurity services.

Risks faced by political campaigns

Since Russians attempted to use cyberattacks to interfere in the U.S. Presidential election in 2016, the news has been filled with reports of cyber threats against political campaigns, in both the United States and around the world. Hackers targeted the Presidential campaigns of Emmanuel Macron in France and Angela Merkel in Germany with phishing attacks, the main political parties in the UK with DDoS attacks, and congressional campaigns in California with a combination of malware, DDoS attacks and brute force login attempts.

Both because of our services to state and local government election websites through the Athenian Project and because a significant number of political parties and candidates for office use our services, Cloudflare has seen many attacks on election infrastructure and political campaigns firsthand.

During the 2020 U.S. election cycle, Cloudflare has provided services to 18 major presidential campaigns, as well as a range of congressional campaigns. On a typical day, Cloudflare blocks 400,000 attacks against political campaigns, and, on a busy day, Cloudflare blocks more than 40 million attacks against campaigns.

What is Cloudflare for Campaigns?

Cloudflare for Campaigns is a suite of Cloudflare products focused on the needs of political campaigns, particularly smaller campaigns that don’t have the resources to bring significant cybersecurity resources in house. To ensure the security of a campaign website, the Cloudflare for Campaigns package includes Business-level service, as well as security tools particularly helpful for political campaigns websites, such as the web application firewall, rate limiting, load balancing, Enterprise level “I am Under Attack Support”, bot management, and multi-user account enablement.


To ensure the security of internal campaign teams, the Cloudflare for Campaigns service will also provide Cloudflare Access, allowing campaigns to secure, authenticate, and monitor user access to any domain, application, or path on Cloudflare, without using a VPN. Along with Access, we will be providing Cloudflare Gateway with DNS-based filtering at multiple locations to protect campaign staff as they navigate the Internet, keeping malicious content off the campaign's network and helping prevent users from running into phishing scams or malware sites. Campaigns can use Gateway after the product's public release.

Cloudflare for Campaigns also includes a Cloudflare reliability and security guide, which lays out best practices for political campaigns to maintain their campaign site and secure their internal teams.

Regulatory Challenges

Although there is widespread agreement that campaigns and political parties face threats of cyberattack, there is less consensus on how best to get political campaigns the help they need.  Many political campaigns and political parties operate under resource constraints, without the technological capability and financial resources to dedicate to cybersecurity. At the same time, campaigns around the world are the subject of a variety of different regulations intended to prevent corruption of democratic processes. As a practical matter, that means that, although campaigns may not have the resources needed to access cybersecurity services, donation of cybersecurity services to campaigns may not always be allowed.

In the U.S., campaign finance regulations prohibit corporations from providing any contributions of either money or services to federal candidates or political party organizations. These rules prevent companies from offering free or discounted services if those services are not provided on the same terms and conditions to similarly situated members of the general public. The Federal Election Commission (FEC), which enforces U.S. campaign finance laws, has struggled with the issue of how best to apply those rules to the provision of free or discounted cybersecurity services to campaigns. In consideration of a number of advisory opinions, it has publicly wrestled with the competing priorities of securing campaigns from cyberattack while not opening a backdoor to donations of goods and services that are intended to curry favor with particular candidates.

The FEC has issued two advisory opinions to tech companies seeking to provide free or discounted cybersecurity services to campaigns. In 2018, the FEC approved a request by Microsoft to offer a package of enhanced online account security protections for “election-sensitive” users. The FEC reasoned that Microsoft was offering the services to its paid users “based on commercial rather than political considerations, in the ordinary course of its business and not merely for promotional consideration or to generate goodwill.” In July 2019, the FEC approved a request by a cybersecurity company to provide low-cost anti-phishing services to campaigns because those services would be provided in the ordinary course of business and on the same terms and conditions as offered to similarly situated non-political clients.

In September 2018, a month after Microsoft submitted its request, Defending Digital Campaigns (DDC), a nonprofit established with the mission to “secure our democratic campaign process by providing eligible campaigns and political parties, committees, and related organizations with knowledge, training, and resources to defend themselves from cyber threats,” submitted a request to the FEC to offer free or reduced-cost cybersecurity services, including from technology corporations, to federal candidates and parties. Over the following months, the FEC issued and requested comment on multiple draft opinions on whether the donation was permissible and, if so, on what basis. As described by the FEC, to support its position, DDC represented that “federal candidates and parties are singularly ill-equipped to counteract these threats.” The FEC’s advisory opinion to DDC noted:

“You [DDC] state that presidential campaign committees and national party committees require expert guidance on cybersecurity and you contend that the 'vast majority of campaigns' cannot afford full-time cybersecurity staff and that 'even basic cybersecurity consulting software and services' can overextend the budgets of most congressional campaigns. AOR004. For instance, you note that a congressional candidate in California reported a breach to the Federal Bureau of Investigation (FBI) in March of this year but did not have the resources to hire a professional cybersecurity firm to investigate the attack, or to replace infected computers. AOR003.”

In May 2019, the FEC approved DDC’s request to partner with technology companies to provide free and discounted cybersecurity services “[u]nder the unusual and exigent circumstances” presented by the request and “in light of the demonstrated, currently enhanced threat of foreign cyberattacks against party and candidate committees.”

All of these opinions demonstrate the FEC’s desire to allow campaigns to access affordable cybersecurity services because of the heightened threat of cyberattack, while still being cautious to ensure that those services are offered transparently and consistent with the goals of campaign finance laws.

Partnering with DDC to Provide Free Services to US Candidates

We share the view of both DDC and the FEC that political campaigns -- which are central to our democracy -- must have the tools to protect themselves against foreign cyberattack. Cloudflare is therefore excited to announce a new partnership with DDC to provide Cloudflare for Campaigns for free to candidates and parties that meet DDC’s criteria.


To receive free services under DDC, political campaigns must meet the following criteria, as the DDC laid out to the FEC:

  • A House candidate’s committee that has at least $50,000 in receipts for the current election cycle, and a Senate candidate’s committee that has at least $100,000 in receipts for the current election cycle;
  • A House or Senate candidate’s committee for candidates who have qualified for the general election ballot in their respective elections; or
  • Any presidential candidate’s committee whose candidate is polling above five percent in national polls.

For more information on eligibility for these services under DDC and the next steps, please visit cloudflare.com/campaigns/usa.

Election package

Although political campaigns are regulated differently all around the world, Cloudflare believes that the integrity of all political campaigns should be protected against powerful adversaries. With this in mind, Cloudflare will therefore also be offering Cloudflare for Campaigns as a paid service, designed to help campaigns all around the world as we attempt to address regulatory hurdles. For more information on how to sign up for the Cloudflare election package, please visit cloudflare.com/campaigns.

05:03

AppSheet. Gesundheit! Oh, we see – it's Google pulling no-code development into a cloudy embrace [The Register]

We'll 'empower millions of citizen developers' says Google. Now where have we heard that before?

Google has cleared the way for non-developers to build applications that make use of Google cloud services, confirming the acquisition of Seattle-based no-code development platform AppSheet.…

04:30

Behold the Internet of Turf: IoT sucks waste energy from living plants to speak to satellites [The Register]

Surely only a matter of time before the Matrix has you?

Scientists say they have used electricity generated by plant life to power an IoT sensor and send a signal to an overhead satellite.…

04:27

There Is Finally Open-Source Accelerated NVIDIA Turing Graphics Support [Phoronix]

Here is another big feature coming for Linux 5.6: the Nouveau driver will have initial accelerated support for NVIDIA "Turing" GPUs! This is coming at long last with NVIDIA set to publicly release the Turing firmware images needed for hardware initialization...

04:16

Intel Lands A Final Batch Of Graphics Driver Updates Ahead Of Linux 5.6 [Phoronix]

Intel's open-source graphics driver crew has submitted a final batch of updates to DRM-Next ahead of the Linux 5.6 kernel merge window. The DRM-Next cut-off is this week ahead of the Linux 5.6 window opening up at the start of February...

03:45

One company on the planet, US-based Afilias, meets the criteria to run Colombia's trendy .co registry – and the DNS world fears a stitch-up [The Register]

South American nation's government accused of fixing TLD contract amid bitter business war

Special report  The Colombian government has been accused by its own internet community of fixing a contract so that just one North American company in particular is eligible to operate the .co top-level domain-name registry.…

03:00

Problems at Oracle's DynDNS: Domain registration customers transferred at short notice, nameserver records changed [The Register]

Must have missed Oracle's December memo: 'It is now time that we part ways with this business'

Customers of Oracle's DynDNS who used the service for domain registration - rather than just dynamic DNS - have suffered a sudden involuntary change of registrar, in some cases redirecting websites to those of different companies.…

02:15

Squirrel away a little IT budget for likely Brexit uncertainty, CIOs warned [The Register]

Plus: 'Member when we modelled sales for Remain? Good times – analyst

IT departments should stash away some of their budgets to cope with the likely disruption caused by Brexit - the UK is scheduled to shift to a new trading agreement with the EU and further afield by the end of 2020.…

01:43

A Slew Of ACO Optimizations For The Radeon Vulkan Driver Landed In Mesa 20.0 [Phoronix]

The Valve-backed ACO compiler back-end that is optionally used by the RADV Radeon Vulkan driver has continued growing in popularity with Linux gamers and also has continued maturing a lot for Mesa 20.0 that is due out later this quarter...

01:00

Today's webcast: Hackers don't care if you're big or small. Tune in to find out how to protect your mid-sized biz [The Register]

EDR is an SMB's best friend, says F-Secure

Webcast  We don’t want to spook anyone, but… cyber-criminals have been busy.…

01:00

Develop GUI apps using Flutter on Fedora [Fedora Magazine]

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install

  • the Android SDK;
  • the Flutter SDK itself; and,
  • optionally, an IDE and its Flutter plugins.

Installing the Android SDK

Flutter requires the installation of the Android SDK with the entire Android Studio suite of tools. Google provides a tar.gz archive. The Android Studio executable can be found in the android-studio/bin directory and is called studio.sh. To run it, open a terminal, cd into the aforementioned directory, and then run:

$ ./studio.sh
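
For example, the whole sequence might look like this, assuming the archive was downloaded to ~/Downloads and unpacked under /opt (both paths, and the exact archive name, will vary):

$ sudo tar xzf ~/Downloads/android-studio-*-linux.tar.gz -C /opt
$ cd /opt/android-studio/bin
$ ./studio.sh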

Installing the Flutter SDK

Before you install Flutter you may want to consider what release channel you want to be on.

The stable channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the beta or even the dev channel.

Either way, you can switch between channels after you install using the flutter channel command explained later in the article.

Head over to the official SDK archive page and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply a xz-compressed tarball (.tar.xz extension). You can extract it wherever you want, given that you add the flutter/bin subdirectory to the PATH environment variable.
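
For example, here is a minimal sketch that assumes the bundle was downloaded to ~/Downloads and that you keep SDKs under ~/development (adjust the paths and file name to match your download):

$ mkdir -p ~/development
$ tar xf ~/Downloads/flutter_linux_*-stable.tar.xz -C ~/development
$ echo 'export PATH="$PATH:$HOME/development/flutter/bin"' >> ~/.bashrc
$ source ~/.bashrc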

Installing the IDE plugins

To install the plugin for Visual Studio Code, you need to search for Flutter in the Extensions tab. Installing it will also install the Dart plugin.

The same will happen when you install the plugin for Android Studio by opening the Settings, then the Plugins tab and installing the Flutter plugin.

Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

Upgrading and Maintaining Your Flutter Installations

The flutter doctor command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from flutter doctor right after installing on Fedora is:

Doctor summary (to see all details, run flutter doctor -v):

[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)

[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)

    ✗ Android licenses not accepted.  To resolve this, run: flutter doctor --android-licenses

[!] Android Studio (version 3.5)

    ✗ Flutter plugin not installed; this adds Flutter specific functionality.

    ✗ Dart plugin not installed; this adds Dart specific functionality.

[!] Connected device

    ! No devices available

! Doctor found issues in 3 categories.

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

$ flutter doctor --android-licenses

Use the flutter channel command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

$ flutter channel <channel_name>

…where you’d replace <channel_name> with the release channel you want to switch to.
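
For example, to move to the beta channel (which you’ll need later for web development):

$ flutter channel beta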

After switching channels, update your installation. It’s also worth running this periodically, or whenever you see that a major update has come out. Run this command:

$ flutter upgrade

Building for Mobile

You can build for Android very easily: the flutter build command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is to create a project with flutter create, which will generate some code for an example app and the necessary android and ios folders.
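
For example, using a hypothetical project name:

$ flutter create my_app
$ cd my_app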

When you’re done coding you can either run:

  • flutter build apk or flutter build appbundle to generate the necessary app files to distribute, or
  • flutter run to run the app on a connected device or emulator directly.

When you run the app on a phone or emulator with flutter run, you can press the r key in the terminal to trigger a stateful hot reload. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital R character to the debug console, you trigger a hot restart. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the bolt icon button and a hot restart with the typical refresh button.

Building for the Desktop

To build apps for the desktop on Fedora, use the flutter-desktop-embedding repository. The flutter create command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.
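
For example, assuming the project is still hosted under google/flutter-desktop-embedding on GitHub, you can fetch it with:

$ git clone https://github.com/google/flutter-desktop-embedding.git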

To build or run apps for Linux, you also need to be on the master release channel and enable Linux desktop app development. To do this, run:

$ flutter config --enable-linux-desktop

After that, you can use flutter run to run the app on your development workstation directly, or run flutter build linux to build a binary file in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the linux/ directory:

$ flutter create .

Building for the Web

Starting with Flutter 1.12, you can build Web apps using Flutter with the mainline codebase, without having to use the flutter_web forked libraries, but you have to be running on the beta channel.

If you are (you can switch to it using flutter channel beta and flutter upgrade as we’ve seen earlier), you need to enable web development by running flutter config --enable-web.

After doing that, you can run flutter run -d web and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run flutter build web to build the static website files in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the web/ directory:

$ flutter create .

Packages for Installing Flutter

Other distributions have packages or community repositories that make installing and updating Flutter more straightforward and intuitive. However, at the time of writing, no such package exists for Flutter on Fedora. If you have experience packaging RPMs for Fedora, consider contributing to this GitHub repository for this COPR package.

The next step is learning Flutter. You can do that in a number of ways:

  • Read the good API reference documentation on the official site
  • Watch some of the introductory video courses available online
  • Read one of the many books out there today. [Check out the author’s bio for a suggestion! — Ed.]

Photo by Randall Ruiz on Unsplash.

00:07

Packet up, I'll take it: Data-center ops giant Equinix gobbles bare-metal server biz [The Register]

Meanwhile: IBM Power processors to appear in a Google Cloud near you, if you ask nicely

Data-center operator Equinix has agreed to acquire upstart Packet in what it hopes is a move into the edge compute market.…

Tuesday, 14 January

23:09

Wayland 1.18 Planned For Release Next Month [Phoronix]

With no new release of Wayland itself in nearly a year, a plan has been rolled out for shipping Wayland 1.18 in mid-February...

23:03

15 years on, Euroboffins finally work out what it took to send the Huygens Titan probe into such a spin [The Register]

Negative torque didn't bring the plucky spacecraft down, thankfully

The European Space Agency’s Huygens probe, the farthest lander ever to touch down in the outer solar system, spun wildly in the opposite direction from the one expected as it descended onto one of Saturn’s moons. Now, scientists have finally figured out what went wrong, 15 years after the probe’s landing.…

22:05

Intel Sends Out Linux Patches For Speed Select Core-Power Controls [Phoronix]

Coming to Linux last year with the 5.3 kernel was Intel Speed Select Technology support as a Cascade Lake feature for optimizing the per-core performance configurations to favor certain cores at the cost of reducing the performance capacity for other CPU cores. That Intel Speed Select (SST) support for Linux is now being enhanced with core-power controls...

20:49

UC Berkeley told to cough up $5m in compensation to comp-sci, engineering students recruited to teach classes [The Register]

Undergrads hired to tutor juniors due to 'rapidly increasing enrollment' – for too little in return

Updated  The University of California, Berkeley is under pressure to cough up more than $5m to reimburse computer-science students who were denied benefits and tuition fee refunds despite working as part-time teaching assistants.…

19:08

IBM, Microsoft, a medley of others sing support for Google against Oracle in Supremes' Java API copyright case [The Register]

Legal war could rest on nineteenth century mapping ruling by past court

With America's Supreme Court expected to hear arguments in Google v. Oracle over the copyrightability of software application programming interfaces come March, the search biz's ideological allies have rushed to support the company with a flurry of filings.…

18:23

Intel Ivybridge + Haswell Require Security Mitigation For Graphics Hardware Flaw [Phoronix]

Earlier today we were first to report on an Intel graphics driver patch mitigating a "Gen9" graphics hardware vulnerability. Details on that new security disclosure are coming to light and it turns out older Intel "Gen" graphics are also affected...

17:31

A New Desktop Theme Is Coming For Ubuntu 20.04 LTS [Phoronix]

With Ubuntu 20.04 to see installation on many desktops (and servers) given its Long-Term Support status, Canonical and the Yaru community team have begun working on a successor to the Yaru theme for this Linux distribution release due out in April...

17:15

Updated your WordPress plugins lately? Here are 320,000 auth-bypassing reasons why you should [The Register]

Another day, another critical set of flaws

A pair of widely used WordPress plugins need to be patched on more than 320,000 websites to close down vulnerabilities that can be exploited to gain admin control of the web publishing software.…

15:26

What can we rid the world of, thinks Google... Poverty? Disease? Yeah, yeah, but first: Third-party cookies – and classic user-agent strings [The Register]

Ad giant chides rivals for encouraging invasive tracking techniques

Analysis  On Tuesday, Google published an update on its Privacy Sandbox proposal, a plan thoroughly panned last summer as a desperate attempt to redefine privacy in a way that's compatible with the ad slinger's business.…

14:48

Valve's Proton 4.11-12 Released With DXVK 1.5.1, Updated SDKs [Phoronix]

The Wine-downstream Proton that powers Valve's Steam Play is up to version 4.11-12 following a release today by a CodeWeavers developer...

13:52

GCC 10 Introduces A Static Analyzer - Static Analysis On C Code With "-fanalyzer" Option [Phoronix]

Within GCC's newly minted Git repository is a big last minute feature for the GCC 10 release: a long-awaited static analyzer...

12:00

Tesla Is Making Use Of The Open-Source Coreboot Within Their Electric Vehicles [Phoronix]

Not only is Linux increasingly used within automobiles but it turns out at least one automobile manufacturer is even using Coreboot within their vehicles...

11:29

Intel's Linux Graphics Driver Gets Patched For A Gen9 Graphics Vulnerability [Phoronix]

On top of the Intel graphics driver patches back from November for denial of service and privilege escalation bugs, the Linux kernel received a new patch today for "CVE-2019-14615" regarding a possible data disclosure with Gen9 graphics hardware...

09:44

Saturday Morning Breakfast Cereal - Mind [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Happily, the man and the robot end up sleeping together later.


Today's News:

09:32

The Time Namespace Appears To Finally Be On-Deck For The Mainline Linux Kernel [Phoronix]

Back in 2018 a time namespace was proposed for the Linux kernel and now in 2020 it looks like this kernel functionality will be merged for mainline, likely with the upcoming Linux 5.6 cycle...

09:07

A cost-effective and extensible testbed for transport protocol development [The Cloudflare Blog]

This was originally published on Perf Planet's 2019 Web Performance Calendar.

At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on QUIC and HTTP/3, which are still in IETF draft, but gaining a lot of interest.

QUIC is a secure and multiplexed transport protocol that aims to perform better than TCP under some network conditions. It is specified in a family of documents: a transport layer which specifies packet format and basic state machine, recovery and congestion control, security based on TLS 1.3, and an HTTP application layer mapping, which is now called HTTP/3.

Let’s focus on the transport and recovery layer first. This layer provides a basis for what is sent on the wire (the packet binary format) and how we send it reliably. It includes how to open the connection, how to handshake a new secure session with the help of TLS, how to send data reliably and how to react when there is packet loss or reordering of packets. Also it includes flow control and congestion control to interact well with other transport protocols in the same network. With confidence in the basic transport and recovery layer,  we can take a look at higher application layers such as HTTP/3.

To develop such a transport protocol, we need multiple stages of the development environment. Since this is a network protocol, it’s best to test in an actual physical network to see how it works on the wire. We may start the development using localhost, but after some time we may want to send and receive packets with other hosts. We can build a lab with a couple of virtual machines, using Virtualbox, VMWare or even with Docker. We also have a local testing environment with a Linux VM. But sometimes these have a limited network (localhost only) or are noisy due to other processes in the same host or virtual machines.

The next step is to have a test lab: typically an isolated network of dedicated x86 hosts, focused on protocol analysis only. Lab configuration is particularly important for testing various cases - there is no one-size-fits-all scenario for protocol testing. For example, EDGE is still running in production mobile networks but LTE is dominant and 5G deployment is in its early stages. WiFi is very common these days. We want to test our protocol in all those environments. Of course, we can't buy every type of machine or have a very expensive network simulator for every type of environment, so using cheap hardware and an open-source OS where we can configure similar environments is ideal.

The QUIC Protocol Testing lab

The goal of the QUIC testing lab is to aid transport layer protocol development. To develop a transport protocol we need to have a way to control our network environment and a way to get as many different types of debugging data as possible. Also we need to get metrics for comparison with other protocols in production.

The QUIC Testing Lab has the following goals:

  • Help with multiple transport protocol development: Developing a new transport layer requires many iterations, from building and validating packets as per protocol spec, to making sure everything works fine under moderate load, to very harsh conditions such as low bandwidth and high packet loss. We need a way to run tests with various network conditions reproducibly in order to catch unexpected issues.
  • Debugging multiple transport protocol development: Recording as much debugging info as we can is important for fixing bugs. Looking into packet captures definitely helps but we also need a detailed debugging log of the server and client to understand the what and why for each packet. For example, when a packet is sent, we want to know why. Is this because there is an application which wants to send some data? Or is this a retransmit of data previously known as lost? Or is this a loss probe which is not an actual packet loss but sent to see if the network is lossy?
  • Performance comparison between each protocol: We want to understand the performance of a new protocol by comparison with existing protocols such as TCP, or with a previous version of the protocol under development. Also we want to test with varying parameters such as changing the congestion control mechanism, changing various timeouts, or changing the buffer sizes at various levels of the stack.
  • Finding a bottleneck or errors easily: Running tests we may see an unexpected error - a transfer that timed out, ended with an error, or was corrupted at the client side - so the harness needs to make sure every test ran correctly, by comparing a checksum of the original file with what was actually downloaded, or by checking various error codes at the protocol or API level.

When we have a test lab with separate hardware, we get the following benefits:

  • We can configure the testing lab without public Internet access - safe and quiet.
  • We have handy access to the hardware and its console for maintenance purposes, or for adding or updating hardware.
  • We can try other CPU architectures. For clients we use the Raspberry Pi for regular testing because it uses the ARM architecture (32-bit or 64-bit), similar to modern smartphones, so testing on ARM helps with compatibility before moving on to a smartphone OS.
  • We can add a real smartphone for testing, such as Android or iPhone. We can test with WiFi but these devices also support Ethernet, so we can test them with a wired network for better consistency.

Lab Configuration

Here is a diagram of our QUIC Protocol Testing Lab:

A cost-effective and extensible testbed for transport protocol development

This is a conceptual diagram; a switch needs to be configured to connect the machines. Currently, we have Raspberry Pis (2 and 3) as an Origin and a Client, and small Intel x86 boxes for the Traffic Shaper and Edge server, plus Ethernet switches for interconnectivity.

  • Origin is simply serving HTTP and HTTPS test objects using a web server. Client may download a file from Origin directly to simulate a download direct from a customer's origin server.
  • Client will download a test object from Origin or Edge, using different protocols. In a typical configuration Client connects to Edge instead of Origin, to simulate an edge server in the real world. For TCP/HTTP we are using the curl command-line client and for QUIC, quiche’s http3_client with some modifications.
  • Edge is running Cloudflare's web server to serve HTTP/HTTPS via TCP and also the QUIC protocol using quiche. Edge server is installed with the same Linux kernel used on Cloudflare's production machines in order to have the same low level network stack.
  • Traffic Shaper is sitting between Client and Edge (and Origin), controlling network conditions. Currently we are using FreeBSD and ipfw + dummynet. Traffic shaping can also be done using Linux' netem which provides additional network simulation features.

The goal is to run tests with various network conditions, such as bandwidth, latency and packet loss upstream and downstream. The lab is able to run a plaintext HTTP test but currently our focus of testing is HTTPS over TCP and HTTP/3 over QUIC. Since QUIC is running over UDP, both TCP and UDP traffic need to be controlled.
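
As a rough sketch of how that can be done with ipfw + dummynet on the FreeBSD-based Traffic Shaper (the bandwidth, delay and loss figures are illustrative, and em0 is an assumed interface name rather than the lab's actual configuration):

$ ipfw pipe 1 config bw 22Mbit/s delay 25ms plr 0.001
$ ipfw add 100 pipe 1 ip from any to any via em0

The second rule pushes all IP traffic - TCP and UDP alike - through the pipe defined by the first.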

Test Automation and Visualization

In the lab, we have a script installed on Client which can run batches of tests with various configuration parameters. For each test combination, we can define a test configuration, including:

  • Network Condition - Bandwidth, Latency, Packet Loss (upstream and downstream)

For example, using the netem traffic shaper we can simulate an LTE network as below (RTT=50ms, BW=22Mbps upstream and downstream, with a BDP-sized queue):

$ tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
$ tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 22mbit buffer 68750 limit 70000
  • Test Object sizes - 1KB, 8KB, … 32MB
  • Test Protocols: HTTPS (TCP) and QUIC (UDP)
  • Number of runs and number of requests in a single connection

The test script outputs a CSV file of results for importing into other tools for data processing and visualization, such as Google Sheets, Excel or even a Jupyter notebook. It can also post the results to a database (ClickHouse in our case), so we can query and visualize them.

Sometimes a whole test combination takes a long time - the current standard test set with simulated 2G, 3G, LTE, WiFi and various object sizes repeated 10 times for each request may take several hours to run. Large object testing on a slow network takes most of the time, so sometimes we also need to run a limited test (e.g. testing LTE-like conditions only for a sanity check) for quick debugging.

Chart using Google Sheets:

The following comparison chart shows the total transfer time in msec for TCP vs QUIC under different network conditions. The QUIC protocol used here is a development version.

A cost-effective and extensible testbed for transport protocol development

Debugging and performance analysis using a smartphone

Mobile devices have become a crucial part of our day to day life, so testing the new transport protocol on mobile devices is critically important for mobile app performance. To facilitate that, we need to have a mobile test app which will proxy data over the new transport protocol under development. With this we have the ability to analyze protocol functionality and performance in mobile devices with different network conditions.

Adding a smartphone to the testbed mentioned above gives an advantage in terms of understanding real performance issues. The major smartphone operating systems, iOS and Android, have quite different networking stacks, and adding a smartphone to the testbed gives us the ability to understand these network stacks in depth, which aids new protocol designs.

A cost-effective and extensible testbed for transport protocol development

The above figure shows the network block diagram of another, similar lab testbed used for protocol testing, where a smartphone is connected both wired and wirelessly. A Linux netem-based traffic shaper sits in between the client and server, shaping the traffic. Various networking profiles are fed to the traffic shaper to mimic real-world scenarios. The client can be either an Android- or iOS-based smartphone, and the server is a vanilla web server serving static files. Client, server and traffic shaper are all connected to the Internet along with the private lab network for management purposes.

The above lab has mobile devices for both Android and iOS, installed with a test app built with proprietary client proxy software for proxying data over the new transport protocol under development. The test app can also make HTTP requests over TCP for comparison purposes.

The Android or iOS test app can be used to issue multiple HTTPS requests of different object sizes, sequentially and concurrently, using TCP and QUIC as the underlying transport protocol. Later, the TTOTAL (total transfer time) of each HTTPS request is used to compare TCP and QUIC performance over different network conditions. One such comparison is shown below:

A cost-effective and extensible testbed for transport protocol development

The table above shows the total transfer time taken for TCP and QUIC requests over an LTE network profile, fetching different objects with different concurrency levels using the test app. Here TCP goes over the native OS network stack and QUIC goes over Cloudflare's QUIC stack.

Debugging network performance issues is hard when it comes to mobile devices. By adding an actual smartphone into the testbed itself we have the ability to take packet captures at different layers. These are very critical in analyzing and understanding protocol performance.

It's easy and straightforward to capture packets and analyze them using the tcpdump tool on x86 boxes, but it's a challenge to capture packets on iOS and Android devices. On iOS devices, ‘rvictl’ lets us capture packets on an external interface, but it has some drawbacks, such as inaccurate timestamps. Since we are dealing with millisecond-level events, timestamps need to be accurate to analyze the root cause of a problem.

We can capture packets on internal loopback interfaces on jailbroken iPhones and rooted Android devices. Jailbreaking a recent iOS device is nontrivial. We also need to make sure that autoupdate of any sort is disabled on such a phone otherwise it would disable the jailbreak and you have to start the whole process again. With a jailbroken phone we have root access to the device which lets us take packet captures as needed using tcpdump.

Packet captures taken using jailbroken iOS devices or rooted Android devices connected to the lab testbed help us analyze performance bottlenecks and improve protocol performance.

iOS and Android devices have different network stacks in their core operating systems. These packet captures also help us understand the network stacks of these mobile devices; for example, on iOS devices, packets punted through the loopback interface had a mysterious delay of 5 to 7 ms.

Conclusion

Cloudflare is actively involved in helping to drive forward the QUIC and HTTP/3 standards by testing and optimizing these new protocols in simulated real world environments. By simulating a wide variety of networks we are working on our mission of Helping Build a Better Internet. For everyone, everywhere.

We would like to thank SangJo Lee, Hiren Panchasara, Lucas Pardue and Sreeni Tellakula for their contributions.

08:42

CoreAVI VkCoreGL SC1 Hits Compliance For Ushering Vulkan Into Safety Critical Systems [Phoronix]

Vulkan could soon be used indirectly on safety critical military and aerospace displays thanks to CoreAVI's VkCoreGL SC1...

Monday, 13 January

08:44

Saturday Morning Breakfast Cereal - Time Machine [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
In the new future, everyone is awful, but we're all DINOSAURS.


Today's News:

02:00

How to setup a DNS server with bind [Fedora Magazine]

The Domain Name System, or DNS, as it’s more commonly known, translates or converts domain names into the IP addresses associated with that domain. DNS is the reason you are able to find your favorite website by name instead of typing an IP address into your browser. This guide shows you how to configure a Master DNS system and one client.

Here are system details for the example used in this article:

dns01.fedora.local     (192.168.1.160 ) - Master DNS server
client.fedora.local    (192.168.1.136 ) - Client 

DNS server configuration

Install the bind packages using sudo:

$ sudo dnf install bind bind-utils -y

The /etc/named.conf configuration file is provided by the bind package to allow you to configure the DNS server.

Edit the /etc/named.conf file:

sudo vi /etc/named.conf

Look for the following line:

listen-on port 53 { 127.0.0.1; };

Add the IP address of your Master DNS server as follows:

listen-on port 53 { 127.0.0.1; 192.168.1.160; };

Look for the next line:

allow-query  { localhost; };

Add your local network range. The example system uses IP addresses in the 192.168.1.X range. This is specified as follows:

allow-query  { localhost; 192.168.1.0/24; };

Specify a forward and a reverse zone. Zone files are simply text files that contain the DNS information for your system, such as IP addresses and host names. The forward zone file makes it possible to translate a host name to its IP address. The reverse zone file does the opposite: it allows a remote system to translate an IP address to the host name.

Look for the following line at the bottom of the /etc/named.conf file:

include "/etc/named.rfc1912.zones";

Here, you’ll specify the zone file information directly above that line as follows:

zone "dns01.fedora.local" IN {
type master;
file "forward.fedora.local";
allow-update { none; };
};

zone "1.168.192.in-addr.arpa" IN {
type master;
file "reverse.fedora.local";
allow-update { none; };
};

forward.fedora.local and reverse.fedora.local are just the names of the zone files you will be creating. They can be called anything you like.

Save and exit.

Create the zone files

Create the forward and reverse zone files you specified in the /etc/named.conf file:

$ sudo vi /var/named/forward.fedora.local

Add the following lines:

$TTL 86400
@   IN  SOA     dns01.fedora.local. root.fedora.local. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          dns01.fedora.local.
@       IN  A           192.168.1.160
dns01           IN  A   192.168.1.160
client          IN  A   192.168.1.136

The host names and IP addresses shown are specific to your environment. Save the file and exit. Next, edit the reverse.fedora.local file:

$ sudo vi /var/named/reverse.fedora.local

Add the following lines:

$TTL 86400
@   IN  SOA     dns01.fedora.local. root.fedora.local. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          dns01.fedora.local.
@       IN  PTR         fedora.local.
dns01           IN  A   192.168.1.160
client          IN  A   192.168.1.136
160     IN  PTR         dns01.fedora.local.
136     IN  PTR         client.fedora.local.

Again, the host names and IP addresses shown are specific to your environment. Save the file and exit.

You’ll also need to configure SELinux and add the correct ownership for the configuration files.

sudo chgrp named -R /var/named
sudo chown -v root:named /etc/named.conf
sudo restorecon -rv /var/named
sudo restorecon /etc/named.conf

Configure the firewall:

sudo firewall-cmd --add-service=dns --perm
sudo firewall-cmd --reload

Check the configuration for any syntax errors:

sudo named-checkconf /etc/named.conf

Your configuration is valid if no output or errors are returned.

Check the forward and reverse zone files.

$ sudo named-checkzone fedora.local /var/named/forward.fedora.local

$ sudo named-checkzone 1.168.192.in-addr.arpa /var/named/reverse.fedora.local

You should see a response of OK:

zone fedora.local/IN: loaded serial 2011071001
OK

zone 1.168.192.in-addr.arpa/IN: loaded serial 2011071001
OK

Enable and start the DNS service

$ sudo systemctl enable named
$ sudo systemctl start named
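
Optionally, confirm that the service started without errors:

$ sudo systemctl status named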

Configuring the resolv.conf file

Edit the /etc/resolv.conf file:

$ sudo vi /etc/resolv.conf

Look for your current name server line or lines. On the example system, a cable modem/router is serving as the name server and so it currently looks like this:

nameserver 192.168.1.1

This needs to be changed to the IP address of the Master DNS server:

nameserver 192.168.1.160

Save your changes and exit.

Unfortunately there is one caveat to be aware of. NetworkManager overwrites the /etc/resolv.conf file if the system is rebooted or networking gets restarted. This means you will lose all of the changes that you made.

To prevent this from happening, make /etc/resolv.conf immutable:

$ sudo chattr +i /etc/resolv.conf 

If you want to set it back and allow it to be overwritten again:

$ sudo chattr -i /etc/resolv.conf

Testing the DNS server

$ dig fedoramagazine.org
; <<>> DiG 9.11.13-RedHat-9.11.13-2.fc30 <<>> fedoramagazine.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8391
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: c7350d07f8efaa1286c670ab5e13482d600f82274871195a (good)
 ;; QUESTION SECTION:
 ;fedoramagazine.org.        IN  A

;; ANSWER SECTION:
 fedoramagazine.org.    50  IN  A   35.197.52.145

;; AUTHORITY SECTION:
 fedoramagazine.org.    86150   IN  NS  ns05.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns02.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns04.fedoraproject.org.

;; ADDITIONAL SECTION:
 ns02.fedoraproject.org.    86150   IN  A   152.19.134.139
 ns04.fedoraproject.org.    86150   IN  A   209.132.181.17
 ns05.fedoraproject.org.    86150   IN  A   85.236.55.10
 ns02.fedoraproject.org.    86150   IN  AAAA    2610:28:3090:3001:dead:beef:cafe:fed5
 ns05.fedoraproject.org.    86150   IN  AAAA    2001:4178:2:1269:dead:beef:cafe:fed5

 ;; Query time: 830 msec
 ;; SERVER: 192.168.1.160#53(192.168.1.160)
 ;; WHEN: Mon Jan 06 08:46:05 CST 2020
 ;; MSG SIZE  rcvd: 266

There are a few things to look at to verify that the DNS server is working correctly. Obviously getting results back is important, but that by itself doesn’t mean the DNS server is actually doing the work.

The QUERY, ANSWER, and AUTHORITY fields at the top should show non-zero counts, as they do in our example:

;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

And the SERVER field should have the IP address of your DNS server:

;; SERVER: 192.168.1.160#53(192.168.1.160)

Notice that this first query took 830 milliseconds to complete:

;; Query time: 830 msec

If you run it again, the query will run much quicker:

$ dig fedoramagazine.org 
;; Query time: 0 msec
;; SERVER: 192.168.1.160#53(192.168.1.160)
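
You can also query the zones you just created directly against the new server, using the example names and addresses from this article (the @ syntax tells dig which server to query, and -x performs a reverse lookup):

$ dig @192.168.1.160 client.fedora.local +short
$ dig @192.168.1.160 -x 192.168.1.136 +short

The first command should return 192.168.1.136 and the second should return client.fedora.local., matching the records in the forward and reverse zone files.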

Client configuration

The client configuration will be a lot simpler.

Install the bind utilities:

$ sudo dnf install bind-utils -y

Edit the /etc/resolv.conf file and configure the Master DNS as the only name server:

$ sudo vi /etc/resolv.conf

This is how it should look:

nameserver 192.168.1.160

Save your changes and exit. Then, make the /etc/resolv.conf file immutable to prevent it from being overwritten and going back to its default settings:

$ sudo chattr +i /etc/resolv.conf

Testing the client

You should get the same results as you did from the DNS server:

$ dig fedoramagazine.org
; <<>> DiG 9.11.13-RedHat-9.11.13-2.fc30 <<>> fedoramagazine.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8391
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 6

;; OPT PSEUDOSECTION:
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: c7350d07f8efaa1286c670ab5e13482d600f82274871195a (good)
 ;; QUESTION SECTION:
 ;fedoramagazine.org.        IN  A

;; ANSWER SECTION:
 fedoramagazine.org.    50  IN  A   35.197.52.145

;; AUTHORITY SECTION:
 fedoramagazine.org.    86150   IN  NS  ns05.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns02.fedoraproject.org.
 fedoramagazine.org.    86150   IN  NS  ns04.fedoraproject.org.

;; ADDITIONAL SECTION:
 ns02.fedoraproject.org.    86150   IN  A   152.19.134.139
 ns04.fedoraproject.org.    86150   IN  A   209.132.181.17
 ns05.fedoraproject.org.    86150   IN  A   85.236.55.10
 ns02.fedoraproject.org.    86150   IN  AAAA    2610:28:3090:3001:dead:beef:cafe:fed5
 ns05.fedoraproject.org.    86150   IN  AAAA    2001:4178:2:1269:dead:beef:cafe:fed5

 ;; Query time: 1 msec
 ;; SERVER: 192.168.1.160#53(192.168.1.160)
 ;; WHEN: Mon Jan 06 08:46:05 CST 2020
 ;; MSG SIZE  rcvd: 266

Make sure the SERVER output has the IP Address of your DNS server.

Your DNS server is now ready to use, and all requests from the client should be going through it!

Sunday, 12 January

11:11

Helping mitigate the Citrix NetScaler CVE with Cloudflare Access [The Cloudflare Blog]

Yesterday, Citrix sent an updated notification to customers warning of a vulnerability in their Application Delivery Controller (ADC) product. If exploited, malicious attackers can bypass the login page of the administrator portal, without authentication, to perform arbitrary code execution.

No patch is available yet. Citrix expects to have a fix for certain versions on January 20 and others at the end of the month.

In the interim, Citrix has asked customers to attempt to mitigate the vulnerability. The recommended steps involve running a number of commands from an administrator command line interface.

The vulnerability relied on by attackers requires that they first be able to reach a login portal hosted by the ADC. Cloudflare can help teams secure that page and the resources protected by the ADC. Teams can place the login page, as well as the administration interface, behind Cloudflare Access’ identity proxy to prevent unauthenticated users from making requests to the portal.

Exploiting URL paths

Citrix ADC, also known as Citrix NetScaler, is an application delivery controller that provides Layer 3 through Layer 7 security for applications and APIs. Once deployed, administrators manage the installation of the ADC through a portal available at a dedicated URL on a hostname they control.

Users and administrators can reach the ADC interface over multiple protocols, but it appears that the vulnerability stems from HTTP requests whose path contains “/vpn/../vpns/”, via the VPN or AAA endpoints, from which a directory traversal exploit is possible.

The suggested mitigation steps ask customers to run commands which enforce new responder policies for the ADC interface. Those policies return 403s when certain paths are requested, blocking unauthenticated users from reaching directories that sit behind the authentication flow.

Protecting administrator portals with Cloudflare Access

To exploit this vulnerability, attackers must first be able to reach a login portal hosted by the ADC. As part of a defense-in-depth strategy, Cloudflare Access can prevent attackers from ever reaching the panel over HTTP or SSH.

Cloudflare Access, part of Cloudflare for Teams, protects internally managed resources by checking each request for identity and permission. When administrators secure an application behind Access, any request to the hostname of that application stops at Cloudflare’s network first. Once there, Cloudflare Access checks the request against the list of users who have permission to reach the application.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.
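
As a rough sketch of what that looks like in practice with the cloudflared daemon (the hostname and origin address here are hypothetical, and the exact flags depend on the cloudflared version you run):

cloudflared tunnel --hostname adc-admin.example.com --url https://192.0.2.10

This opens an outbound-only connection from the host running cloudflared to Cloudflare, publishing the internal origin behind a hostname that Access can then protect.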

To defend against attackers addressing IPs directly, Argo Tunnel can help secure the interface and force outbound requests through Cloudflare Access. With Argo Tunnel, and firewall rules preventing inbound traffic, no request can reach those IPs without first hitting Cloudflare, where Access can evaluate the request for authentication.

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, instantly connected to the application without being slowed down. Internally managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, giving administrators more visibility and security than a traditional VPN.

Helping mitigate the Citrix NetScaler CVE with Cloudflare Access

Cloudflare Access can also be bundled with the Cloudflare WAF, and WAF rules can be applied to guard against this as well. Adding Cloudflare Access, the Cloudflare WAF, and the mitigation commands from Citrix together provide layers of security while a patch is in development.

How to get started

We recommend that users of the Citrix ADC follow the mitigation steps recommended by Citrix. Cloudflare Access adds another layer of security by enforcing identity-based authentication for requests made over HTTP and SSH to the ADC interface. Together, these steps can help form a defense-in-depth strategy until a patch is released by Citrix.

To get started, Citrix ADC users can place their ADC interface and exposed endpoints behind a bastion host secured by Cloudflare Access. On that bastion host, administrators can use Cloudflare Argo Tunnel to open outbound-only connections to Cloudflare through which HTTP and SSH requests can be proxied.

Once deployed, users of the login portal can connect to the protected hostname. Cloudflare Access will prompt them to login with their identity provider and Cloudflare will validate the user against the rules created to control who can reach the interface. If authenticated and allowed, the user will be able to connect. No other requests will be able to reach the interface over HTTP or SSH without authentication.

The first five seats of Cloudflare Access are free. Teams can sign up here to get started.

08:36

Saturday Morning Breakfast Cereal - Ark [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
There's actually a whole 'nother page on the other side.


Today's News:

Hey! Me and Kelly are working on a project, and we started a little twitter account where we post Weird Stories from Space.

Saturday, 11 January

05:49

Saturday Morning Breakfast Cereal - Phantasm [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Credit to Kelly for this idea. The story has been changed to protect the innocent.


Today's News:

Friday, 10 January

09:33

Saturday Morning Breakfast Cereal - Rich [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Honestly, if you see a prophet and he's not super rich, you probably shouldn't trust him.


Today's News:

Thursday, 09 January

08:36

Saturday Morning Breakfast Cereal - Coffee Style [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Fact - the French only eat out of teacups with the French tricolor.


Today's News:

Wednesday, 08 January

10:08

Accelerating UDP packet transmission for QUIC [The Cloudflare Blog]

This was originally published on Perf Planet's 2019 Web Performance Calendar.

QUIC, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of UDP datagrams, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating systems updates.

But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.

For the purpose of this blog post we will only be concentrating on measuring throughput of QUIC connections, which, while necessary, is not enough to paint an accurate overall picture of the performance of the QUIC protocol (or its implementations) as a whole.

Test Environment

The client used in the measurements is h2load, built with QUIC and HTTP/3 support, while the server is NGINX, built with the open-source QUIC and HTTP/3 module provided by Cloudflare which is based on quiche (github.com/cloudflare/quiche), Cloudflare's own open-source implementation of QUIC and HTTP/3.

The client and server are run on the same host (my laptop) running Linux 5.3, so the numbers don’t necessarily reflect what one would see in a production environment over a real network, but it should still be interesting to see how much of an impact each of the techniques have.

Baseline

Currently the code that implements QUIC in NGINX uses the sendmsg() system call to send a single UDP packet at a time.

ssize_t sendmsg(int sockfd, const struct msghdr *msg,
    int flags);

The struct msghdr carries a struct iovec which can in turn carry multiple buffers. However, all of the buffers within a single iovec will be merged together into a single UDP datagram during transmission. The kernel will then take care of encapsulating the buffer in a UDP packet and sending it over the wire.

Accelerating UDP packet transmission for QUIC

The throughput of this particular implementation tops out at around 80-90 MB/s, as measured by h2load when performing 10 sequential requests for a 100 MB resource.

Accelerating UDP packet transmission for QUIC

sendmmsg()

Due to the fact that sendmsg() only sends a single UDP packet at a time, it needs to be invoked quite a lot in order to transmit all of the QUIC packets required to deliver the requested resources, as illustrated by the following bpftrace command:

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 904539

Each of those system calls causes an expensive context switch between the application and the kernel, thus impacting throughput.

But while sendmsg() only transmits a single UDP packet at a time for each invocation, its close cousin sendmmsg() (note the additional “m” in the name) is able to batch multiple packets per system call:

int sendmmsg(int sockfd, struct mmsghdr *msgvec,
    unsigned int vlen, int flags);

Multiple struct mmsghdr structures can be passed to the kernel as an array, each in turn carrying a single struct msghdr with its own struct iovec, with each element in the msgvec array representing a single UDP datagram.

Accelerating UDP packet transmission for QUIC

Let's see what happens when NGINX is updated to use sendmmsg() to send QUIC packets:

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 2437
@[tracepoint:syscalls:sys_enter_sendmmsg]: 15676

The number of system calls went down dramatically, which translates into an increase in throughput, though not quite as big as the decrease in syscalls:

Accelerating UDP packet transmission for QUIC

UDP segmentation offload

With sendmsg() as well as sendmmsg(), the application is responsible for separating each QUIC packet into its own buffer in order for the kernel to be able to transmit it. While the implementation in NGINX uses static buffers to implement this, so there is no overhead in allocating them, all of these buffers need to be traversed by the kernel during transmission, which can add significant overhead.

Linux supports a feature, Generic Segmentation Offload (GSO), which allows the application to pass a single "super buffer" to the kernel, which will then take care of segmenting it into smaller packets. The kernel will try to postpone the segmentation as much as possible to reduce the overhead of traversing outgoing buffers (some NICs even support hardware segmentation, but it was not tested in this experiment due to lack of capable hardware). Originally GSO was only supported for TCP, but support for UDP GSO was recently added as well, in Linux 4.18.

This feature can be controlled using the UDP_SEGMENT socket option:

setsockopt(fd, SOL_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size));

As well as via ancillary data, to control segmentation for each sendmsg() call:

cm = CMSG_FIRSTHDR(&msg);
cm->cmsg_level = SOL_UDP;
cm->cmsg_type = UDP_SEGMENT;
cm->cmsg_len = CMSG_LEN(sizeof(uint16_t));
*((uint16_t *) CMSG_DATA(cm)) = gso_size;

Where gso_size is the size of each segment that forms the "super buffer" passed to the kernel from the application. Once configured, the application can provide one contiguous large buffer containing a number of packets of gso_size length (as well as a final smaller packet), which will then be segmented by the kernel (or the NIC if hardware segmentation offloading is supported and enabled).

Accelerating UDP packet transmission for QUIC

Up to 64 segments can be batched with the UDP_SEGMENT option.
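
As an aside, ethtool can show whether a NIC advertises hardware UDP segmentation offload; eth0 is an assumed interface name, and on recent kernels the feature appears as tx-udp-segmentation (the kernel's software GSO path is used when the hardware lacks it):

% ethtool -k eth0 | grep tx-udp-segmentation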

GSO with plain sendmsg() already delivers a significant improvement:

Accelerating UDP packet transmission for QUIC

And indeed the number of syscalls also went down significantly compared to plain sendmsg():

% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 18824

GSO can also be combined with sendmmsg() to deliver an even bigger improvement. The idea being that each struct msghdr can be segmented in the kernel by setting the UDP_SEGMENT option using ancillary data, allowing an application to pass multiple “super buffers”, each carrying up to 64 segments, to the kernel in a single system call.

The improvement is again fairly significant:

Accelerating UDP packet transmission for QUIC

Evolving from AFAP

Transmitting packets as fast as possible is easy to reason about, and there's much fun to be had in optimizing applications for that, but in practice this is not always the best strategy when optimizing protocols for the Internet.

Bursty traffic is more likely to cause or be affected by congestion on any given network path, which will inevitably defeat any optimization implemented to increase transmission rates.

Packet pacing is an effective technique to squeeze more performance out of a network flow. The idea is that adding a short delay between outgoing packets will smooth out bursty traffic and reduce the chance of congestion and packet loss. For TCP this was originally implemented in Linux via the fq packet scheduler, and later by the BBR congestion control algorithm implementation, which implements its own pacer.

Accelerating UDP packet transmission for QUIC

Due to the nature of current QUIC implementations, which reside entirely in user-space, pacing of QUIC packets conflicts with any of the techniques explored in this post, because pacing each packet separately during transmission will prevent any batching on the application side, and in turn batching will prevent pacing, as batched packets will be transmitted as fast as possible once received by the kernel.

However Linux provides some facilities to offload the pacing to the kernel and give back some control to the application:

  • SO_MAX_PACING_RATE: an application can define this socket option to instruct the fq packet scheduler to pace outgoing packets up to the given rate. This works for UDP sockets as well, but it is yet to be seen how this can be integrated with QUIC, as a single UDP socket can be used for multiple QUIC connections (unlike TCP, where each connection has its own socket). In addition, this is not very flexible, and might not be ideal when implementing the BBR pacer.
  • SO_TXTIME / SCM_TXTIME: an application can use these options to schedule transmission of specific packets at specific times, essentially instructing fq to delay packets until the provided timestamp is reached. This gives the application a lot more control, and can be easily integrated into sendmsg() as well as sendmmsg(). But it does not yet support specifying different times for each packet when GSO is used, as there is no way to define multiple timestamps for packets that need to be segmented (each segmented packet essentially ends up being sent at the same time anyway).
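
Both of these facilities rely on the fq packet scheduler being attached to the egress interface (SO_TXTIME can alternatively be handled by the dedicated etf qdisc). A minimal sketch, with eth0 as an assumed interface name:

% sudo tc qdisc replace dev eth0 root fq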

While the performance gains achieved by using the techniques illustrated here are fairly significant, there are still open questions around how any of this will work with pacing, so more experimentation is required.

09:29

Saturday Morning Breakfast Cereal - Four-legs [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
What if global warming is a conspiracy by fish to grow their empire?


Today's News:

08:00

Prototyping optimizations with Cloudflare Workers and WebPageTest [The Cloudflare Blog]

This article was originally published as part of  Perf Planet's 2019 Web Performance Calendar.

Have you ever wanted to quickly test a new performance idea, or see if the latest performance wisdom is beneficial to your site? As web performance appears to be a stochastic process, it is really important to be able to iterate quickly and review the effects of different experiments. The challenge is to be able to arbitrarily change requests and responses without the overhead of setting up another internet-facing server. This can be straightforward to implement by combining two of my favourite technologies: WebPageTest and Cloudflare Workers. Pat Meenan sums this up with the following slide from a recent “getting the most out of WebPageTest” presentation:

Prototyping optimizations with Cloudflare Workers and WebPageTest

So what is Cloudflare Workers and why is it ideally suited to easy prototyping of optimizations?

Cloudflare Workers

From the documentation :

Cloudflare Workers provides a lightweight JavaScript execution environment that allows developers to augment existing applications or create entirely new ones without configuring or maintaining infrastructure. A Cloudflare Worker is a programmable proxy which brings the simplicity and flexibility of the Service Workers event-based fetch API from the browser to the edge. This allows a worker to intercept and modify requests and responses.

Prototyping optimizations with Cloudflare Workers and WebPageTest

With the Service Worker API you can add an EventListener to any fetch event that is routed through the worker script and modify the request to come from a different origin.

Cloudflare Workers also provides a streaming HTMLRewriter to enable on the fly modification of HTML as it passes through the worker. The streaming nature of this parser ensures latency is minimised as the entire HTML document does not have to be buffered before rewriting can happen.

Setting up a worker

It is really quick and easy to sign up for a free subdomain at workers.dev, which provides you with 100,000 free requests per day. There is a quick-start guide available here. To be able to run the examples in this post you will need to install Wrangler, the CLI tool for deploying workers. Once Wrangler is installed, run the following command to download the example worker project:

wrangler generate wpt-proxy https://github.com/xtuc/WebPageTest-proxy

You will then need to update the wrangler.toml with your account_id, which can be found in the dashboard in the right sidebar. Then configure an API key with the command:

wrangler config

Finally, you can publish the worker with:  

wrangler publish

At this point, the worker will be active at:

https://wpt-proxy.<your-subdomain>.workers.dev.

WebPageTest OverrideHost  

Now that your worker is configured, the next step is to configure WebPageTest to redirect requests through the worker. WebPageTest has a feature where it can re-point arbitrary origins to a different domain. To access the feature in WebPageTest, you need to use the WebPageTest scripting language "overrideHost" command, as shown:

Prototyping optimizations with Cloudflare Workers and WebPageTest

This example will redirect all network requests for www.bbc.co.uk to wpt-proxy.prf.workers.dev instead. WebPageTest also adds an x-host header to each redirected request so that the destination can determine for which host the request was originally intended:

x-host: www.bbc.co.uk

The script can process multiple overrideHost commands to override multiple different origins. If HTTPS is used, WebPageTest can use HTTP/2 and benefit from connection coalescing:  

overrideHost www.bbc.co.uk wpt-proxy.prf.workers.dev    
overrideHost nav.files.bbci.co.uk wpt-proxy.prf.workers.dev
navigate https://www.bbc.co.uk

The overrideHost command also supports wildcards:

overrideHost *bbc.co.uk wpt-proxy.prf.workers.dev    
navigate https://www.bbc.co.uk

There are a few special strings that can be used in a script when bulk testing, so a single script can be re-used across multiple URLs:

  • %URL% - replaced with the URL of the current test
  • %HOST% - replaced with the hostname of the URL of the current test
  • %HOSTR% - replaced with the hostname of the final URL, in case the test URL redirects

A more generic script would look like this:    

overrideHost %HOSTR% wpt-proxy.prf.workers.dev    
navigate %URL% 

Basic worker

In the basic example below, the worker listens for the fetch event, looks for the x-host header that WebPageTest has set, and responds by fetching the content from the original URL:

/* 
* Handle all requests. 
* Proxy requests with an x-host header and return 403
* for everything else
*/

addEventListener("fetch", event => {    
   const host = event.request.headers.get('x-host');        
   if (host) {          
      const url = new URL(event.request.url);          
      const originUrl = url.protocol + '//' + host + url.pathname + url.search;             
      let init = {             
         method: event.request.method,             
         redirect: "manual",             
         headers: [...event.request.headers]          
      };          
      event.respondWith(fetch(originUrl, init));        
   } 
   else {           
     const response = new Response('x-Host headers missing', {status: 403});                
     event.respondWith(response);        
   }    
});

The source code can be found here and instructions to download and deploy this worker are described in the earlier section.

So what happens if we point all the domains on the BBC website through this worker, using the following config:  

overrideHost    *bbci.co.uk wpt.prf.workers.dev    
overrideHost    *bbc.co.uk  wpt.prf.workers.dev    
navigate    https://www.bbc.co.uk

with the test configured to use a 3G Fast connection from a UK test location.

Before and after comparison of the BBC website when using a single connection.

The potential performance improvement of loading a page over a single connection, eliminating the additional DNS lookup, TCP connection and TLS handshakes, can be seen  by comparing the filmstrips and waterfalls. There are several reasons why you may not want or be able to move everything to a single domain, but at least it is now easy to see what the performance difference would be.  

HTMLRewriter

With the HTMLRewriter, it is possible to change the HTML response as it passes through the worker. A jQuery-like syntax provides CSS-selector matching and a standard set of DOM mutation methods. For instance you could rewrite your page to measure the effects of different preload/prefetch strategies, review the performance savings of removing or using different third-party scripts, or you could stock-take the HEAD of your document. One piece of performance advice is to self-host some third-party scripts. This example script invokes the HTMLRewriter to listen for a script tag with a src attribute. If the script is from a proxiable domain the src is rewritten to be first-party, with a specific path prefix.

async function rewritePage(request) {
  const response = await fetch(request);
  return new HTMLRewriter()
    .on("script[src]", {
      element: el => {
        let src = el.getAttribute("src");
        if (PROXIED_URL_PREFIXES_RE.test(src)) {
          el.setAttribute("src", createProxiedScriptUrl(src));
        }
      }
    })
    .transform(response);
}

Subsequently, when the browser makes a request with the specific prefix, the worker fetches the asset from the original URL. This example can be downloaded with this command:    

wrangler generate test https://github.com/xtuc/rewrite-3d-party.git
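
For a rough idea of the round trip, the sketch below shows how the rewrite on the way out and the proxying on the way back could fit together. The prefix and helper names are assumptions for illustration; the exact implementation lives in the repository above.

// Hypothetical prefix used to mark rewritten third-party script URLs.
const PROXY_PREFIX = "/3p-proxy/";

// Rewrite e.g. https://third.party/lib.js to /3p-proxy/third.party/lib.js
// so the browser requests it from the first-party origin.
function createProxiedScriptUrl(src) {
  const url = new URL(src);
  return PROXY_PREFIX + url.hostname + url.pathname + url.search;
}

// When a request arrives with that prefix, map the path back to the
// original third-party URL so the worker can fetch the asset.
function resolveProxiedScriptUrl(pathname) {
  const rest = pathname.slice(PROXY_PREFIX.length);
  const firstSlash = rest.indexOf("/");
  return "https://" + rest.slice(0, firstSlash) + rest.slice(firstSlash);
}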

Request Mangling

As well as rewriting content, it is possible to change or delay a request. Below is an example of how to randomly add a delay to a request (DELAY_PERCENT and DELAY_MS are constants defined outside the snippet):

addEventListener("fetch", event => {    
    const host = event.request.headers.get('x-host');    
    if (host) { 
//....     
    // Add the delay if necessary     
    if (Math.random() * 100 < DELAY_PERCENT) {       
      await new Promise(resolve => setTimeout(resolve, DELAY_MS));     
    }    
    event.respondWith(fetch(originUrl, init));
//...
}

HTTP/2 prioritization

What if you want to see what effect changing the HTTP/2 prioritization of assets would have on your website? Cloudflare Workers provide custom HTTP/2 prioritization schemes that can be applied by setting a custom header on the response. The cf-priority header is defined as <priority>/<concurrency>, so adding:

response.headers.set('cf-priority', "30/0");

would set the priority of that response to 30 with a concurrency of 0. Similarly, “30/1” would set the concurrency to 1 and “30/n” would set it to n. With this flexibility, you can prioritize the bytes that are important for your website, or run a bulk test to prove that your new prioritization scheme is better than any of the existing browser implementations.
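
As a concrete illustration, here is a minimal sketch of a worker that boosts the priority of CSS responses. Note that the headers of a response returned by fetch() are immutable, so the sketch copies the response before setting cf-priority; the priority values are just examples.

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const response = await fetch(request);
  // Copy the response so its headers can be modified.
  const newResponse = new Response(response.body, response);
  const contentType = newResponse.headers.get("content-type") || "";
  if (contentType.includes("text/css")) {
    // Example values: priority 30, concurrency 0.
    newResponse.headers.set("cf-priority", "30/0");
  }
  return newResponse;
}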

Summary

A major barrier to understanding and innovation is the amount of time it takes to get feedback. Having a quick and easy framework to try out a new idea and understand its impact is key. I hope this post has convinced you that combining WebPageTest and Cloudflare Workers is an easy solution to this problem and is indeed magic.

02:00

How to setup multiple monitors in sway [Fedora Magazine]

Sway is a tiling Wayland compositor which has mostly the same features, look and workflow as the i3 X11 window manager. Because Sway uses Wayland instead of X11, the tools used to configure X11 don’t always work in Sway. This includes tools like xrandr, which are used in X11 window managers or desktops to set up monitors. This is why monitors have to be set up by editing the Sway config file, and that’s what this article is about.

Getting your monitor IDs

First, you have to get the names sway uses to refer to your monitors. You can do this by running:

$ swaymsg -t get_outputs

You will get information about all of your monitors, with each monitor separated by an empty line.

You have to look for the first line of every section and note what comes after “Output”. For example, when you see a line like “Output DVI-D-1 ‘Philips Consumer Electronics Company’”, the output ID is “DVI-D-1”. Note these IDs and which physical monitors they belong to.

Editing the config file

If you haven’t edited the Sway config file before, you first have to copy the default config to your home directory. Create the target directory if it doesn’t exist yet, then copy the file:

mkdir -p ~/.config/sway
cp /etc/sway/config ~/.config/sway/config

Now the default config file is located in ~/.config/sway and called “config”. You can edit it using any text editor.

Now you have to do a little bit of math. Imagine a grid with the origin in the top left corner. The units of the X and Y coordinates are pixels. The Y axis is inverted. This means that if you, for example, start at the origin and you move 100 pixels to the right and 80 pixels down, your coordinates will be (100, 80).

You have to calculate where your displays are going to end up on this grid. The locations of the displays are specified with the top left pixel. For example, if we want to have a monitor with name HDMI1 and a resolution of 1920×1080, and to the right of it a laptop monitor with name eDP1 and a resolution of 1600×900, you have to type this in your config file:

output HDMI1 pos 0 0
output eDP1 pos 1920 0

You can also specify the resolutions manually by using the res option: 

output HDMI1 pos 0 0 res 1920x1080
output eDP1 pos 1920 0 res 1600x900

Binding workspaces to monitors

Using Sway with multiple monitors can be a little bit tricky when it comes to workspace management. Luckily, you can bind workspaces to a specific monitor, so you can easily switch to that monitor and use your displays more efficiently. This is done with the workspace command in your config file. For example, if you want to bind workspaces 1 and 2 to monitor DVI-D-1 and workspaces 8 and 9 to monitor HDMI-A-1, you can do that by adding:

workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1
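
Putting it all together, a complete multi-monitor section of the config file could look something like this (the output names, positions and resolutions here are only examples; substitute the IDs and values that match your own setup):

output DVI-D-1 pos 0 0 res 1920x1080
output HDMI-A-1 pos 1920 0 res 1920x1080
workspace 1 output DVI-D-1
workspace 2 output DVI-D-1
workspace 8 output HDMI-A-1
workspace 9 output HDMI-A-1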

That’s it! These are the basics of multi monitor setup in sway. A more detailed guide can be found at https://github.com/swaywm/sway/wiki#Multihead

Tuesday, 07 January

08:59

Saturday Morning Breakfast Cereal - Ratio [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
People who have negative percents are called 'optimists'.


Today's News:

Last chance to submit a proposal for BAHFest Houston or BAHFest London!

07:00

Introducing Cloudflare for Teams [The Cloudflare Blog]

Ten years ago, when Cloudflare was created, the Internet was a place that people visited. People still talked about ‘surfing the web’ and the iPhone was less than two years old, but on July 4, 2009 large scale DDoS attacks were launched against websites in the US and South Korea.

Those attacks highlighted how fragile the Internet was and how all of us were becoming dependent on access to the web as part of our daily lives.

Fast forward ten years and the speed, reliability and safety of the Internet is paramount as our private and work lives depend on it.

We started Cloudflare to solve one half of every IT organization's challenge: how do you ensure the resources and infrastructure that you expose to the Internet are safe from attack, fast, and reliable. We saw that the world was moving away from hardware and software to solve these problems and instead wanted a scalable service that would work around the world.

To deliver that, we built one of the world's largest networks. Today our network spans more than 200 cities worldwide and is within milliseconds of nearly everyone connected to the Internet. We have built the capacity to stand up to nation-state scale cyberattacks and a threat intelligence system powered by the immense amount of Internet traffic that we see.

Today we're expanding Cloudflare's product offerings to solve the other half of every IT organization's challenge: ensuring the people and teams within an organization can access the tools they need to do their job and are safe from malware and other online threats.

The speed, reliability, and protection we’ve brought to public infrastructure is extended today to everything your team does on the Internet.

In addition to protecting an organization's infrastructure, IT organizations are charged with ensuring that employees of an organization can access the tools they need safely. Traditionally, these problems would be solved by hardware products like VPNs and Firewalls. VPNs let authorized users access the tools they needed and Firewalls kept malware out.

Castle and Moat

The dominant model was the idea of a castle and a moat. You put all your valuable assets inside the castle. Your Firewall created the moat around the castle to keep anything malicious out. When you needed to let someone in, a VPN acted as the drawbridge over the moat.

This is still the model most businesses use today, but it's showing its age. The first challenge is that if an attacker is able to find its way over the moat and into the castle, then it can cause significant damage. Unfortunately, few weeks go by without a news story about how an organization had significant data compromised because an employee fell for a phishing email, or a contractor was compromised, or someone was able to sneak into an office and plug in a rogue device.

The second challenge of the model is the rise of cloud and SaaS. Increasingly an organization's resources aren't just in one castle anymore, but instead in different public cloud and SaaS vendors.

Services like Box, for instance, provide better storage and collaboration tools than most organizations could ever hope to build and manage themselves. But there's literally nowhere you can ship a hardware box to Box in order to build your own moat around their SaaS castle. Box provides some great security tools themselves, but they are different from the tools provided by every other SaaS and public cloud vendor. Where IT organizations used to try to have a single pane of glass with a complex mess of hardware to see who was getting stopped by their moats and who was crossing their drawbridges, SaaS and cloud make that visibility increasingly difficult.

The third challenge to the traditional castle and moat strategy of IT is the rise of mobile. Where once upon a time your employees would all show up to work in your castle, now people are working around the world. Requiring everyone to login to a limited number of central VPNs becomes obviously absurd when you picture it as villagers having to sprint back from wherever they are across a drawbridge whenever they want to get work done. It's no wonder VPN support is one of the top IT organization tickets and likely always will be for organizations that maintain a castle and moat approach.

But it's worse than that. Mobile has also introduced a culture where employees bring their own devices to work. Or, even if on a company-managed device, work from the road or home — beyond the protected walls of the castle and without the security provided by a moat.

If you'd looked at how we managed our own IT systems at Cloudflare four years ago, you'd have seen us following this same model. We used firewalls to keep threats out and required every employee to login through our VPN to get their work done. Personally, as someone who travels extensively for my job, it was especially painful.

Regularly, someone would send me a link to an internal wiki article asking for my input. I'd almost certainly be working from my mobile phone in the back of a cab running between meetings. I'd try and access the link and be prompted to login to our VPN in San Francisco. That's when the frustration would start.

Corporate mobile VPN clients, in my experience, all seem to be powered by some 100-sided die that only will allow you to connect if the number of miles you are from your home office is less than 25 times whatever number is rolled. Much frustration, and several IT tickets later, with a little luck I may be able to connect. And, even then, the experience was horribly slow and unreliable.

When we audited our own system, we found that the frustration with the process had caused multiple teams to create work arounds that were, effectively, unauthorized drawbridges over our carefully constructed moat. And, as we increasingly adopted SaaS tools like Salesforce and Workday, we lost much visibility into how these tools were being used.

Around the same time we were realizing the traditional approach to IT security was untenable for an organization like Cloudflare, Google published their paper titled "BeyondCorp: A New Approach to Enterprise Security." The core idea was that a company's intranet should be no more trusted than the Internet. And, rather than the perimeter being enforced by a singular moat, instead each application and data source should authenticate the individual and device each time it is accessed.

The BeyondCorp idea, which has come to be known as a ZeroTrust model for IT security, was influential for how we thought about our own systems. Powerfully, because Cloudflare had a flexible global network, we were able to use it both to enforce policies as our team accessed tools as well as to protect ourselves from malware as we did our jobs.

Cloudflare for Teams

Today, we're excited to announce Cloudflare for Teams™: the suite of tools we built to protect ourselves, now available to help any IT organization, from the smallest to the largest.

Cloudflare for Teams is built around two complementary products: Access and Gateway. Cloudflare Access™ is the modern VPN — a way to ensure your team members get fast access to the resources they need to do their job while keeping threats out. Cloudflare Gateway™ is the modern Next Generation Firewall — a way to ensure that your team members are protected from malware and follow your organization's policies wherever they go online.

Powerfully, both Cloudflare Access and Cloudflare Gateway are built atop the existing Cloudflare network. That means they are fast, reliable, scalable to the largest organizations, DDoS resistant, and located everywhere your team members are today and wherever they may travel. Have a senior executive going on a photo safari to see giraffes in Kenya, gorillas in Rwanda, and lemurs in Madagascar — don't worry, we have Cloudflare data centers in all those countries (and many more) and they all support Cloudflare for Teams.

All Cloudflare for Teams products are informed by the threat intelligence we see across all of Cloudflare's products. We see such a large diversity of Internet traffic that we often see new threats and malware before anyone else. We've supplemented our own proprietary data with additional data sources from leading security vendors, ensuring Cloudflare for Teams provides a broad set of protections against malware and other online threats.

Moreover, because Cloudflare for Teams runs atop the same network we built for our infrastructure protection products, we can deliver them very efficiently. That means that we can offer these products to our customers at extremely competitive prices. Our goal is to make the return on investment (ROI) for all Cloudflare for Teams customers nothing short of a no brainer. If you’re considering another solution, contact us before you decide.

Both Cloudflare Access and Cloudflare Gateway also build off products we've launched and battle tested already. For example, Gateway builds, in part, off our 1.1.1.1 Public DNS resolver. Today, more than 40 million people trust 1.1.1.1 as the fastest public DNS resolver globally. By adding malware scanning, we were able to create our entry-level Cloudflare Gateway product.

Cloudflare Access and Cloudflare Gateway build off our WARP and WARP+ products. We intentionally built a consumer mobile VPN service because we knew it would be hard. The millions of WARP and WARP+ users who have put the product through its paces have ensured that it's ready for the enterprise. That we have 4.5 stars across more than 200,000 ratings, just on iOS, is a testament to how reliable the underlying WARP and WARP+ engines have become. Compare that with the ratings of any corporate mobile VPN client, which are unsurprisingly abysmal.

We’ve partnered with some incredible organizations to create the ecosystem around Cloudflare for Teams. These include endpoint security solutions such as VMware Carbon Black, Malwarebytes, and Tanium; SIEM and analytics solutions such as Datadog, Sumo Logic, and Splunk; and identity platforms such as Okta, OneLogin, and Ping Identity. Feedback from these partners and more is at the end of this post.

If you’re curious about more of the technical details about Cloudflare for Teams, I encourage you to read Sam Rhea’s post.

Serving Everyone

Cloudflare has always believed in the power of serving everyone. That’s why we’ve offered a free version of Cloudflare for Infrastructure since we launched in 2010. That belief doesn’t change with our launch of Cloudflare for Teams. For both Cloudflare Access and Cloudflare Gateway, there will be free versions to protect individuals, home networks, and small businesses. We remember what it was like to be a startup and believe that everyone deserves to be safe online, regardless of their budget.

With both Cloudflare Access and Gateway, the products are segmented along a Good, Better, Best framework. For Access, that breaks out into Access Basic, Access Pro, and Access Enterprise, with some Access Enterprise features rolling out over the coming months.

We wanted a similar Good, Better, Best framework for Cloudflare Gateway. Gateway Basic can be provisioned in minutes through a simple change to your network’s recursive DNS settings. Once in place, network administrators can set rules on what domains should be allowed and filtered on the network. Cloudflare Gateway is informed both by the malware data gathered from our global sensor network as well as a rich corpus of domain categorization, allowing network operators to set whatever policy makes sense for them. Gateway Basic leverages the speed of 1.1.1.1 with granular network controls.

Gateway Pro, which we’re announcing today and you can sign up to beta test as its features roll out over the coming months, extends the DNS-provisioned protection to a full proxy. Gateway Pro can be provisioned via the WARP client — which we are extending beyond iOS and Android mobile devices to also support Windows, MacOS, and Linux — or network policies including MDM-provisioned proxy settings or GRE tunnels from office routers. This allows a network operator to filter on policies not merely by the domain but by the specific URL.

Building the Best-in-Class Network Gateway

While Gateway Basic (provisioned via DNS) and Gateway Pro (provisioned as a proxy) made sense, we wanted to imagine what the best-in-class network gateway would be for Enterprises that valued the highest level of performance and security. As we talked to these organizations we heard an ever-present concern: just surfing the Internet created risk of unauthorized code compromising devices. With every page that every user visited, third party code (JavaScript, etc.) was being downloaded and executed on their devices.

The solution, they suggested, was to isolate the local browser from third party code and have websites render in the network. This technology is known as browser isolation. And, in theory, it’s a great idea. Unfortunately, in practice with current technology, it doesn’t perform well. The most common way browser isolation technology works is to render the page on a server and then push a bitmap of the page down to the browser. This is known as pixel pushing. The challenge is that this approach can be slow and bandwidth intensive, and it breaks many sophisticated web applications.

We were hopeful that we could solve some of these problems by moving the rendering of the pages to Cloudflare’s network, which would be closer to end users. So we talked with many of the leading browser isolation companies about potentially partnering. Unfortunately, as we experimented with their technologies, even with our vast network, we couldn’t overcome the sluggish feel that plagues existing browser isolation solutions.

Enter S2 Systems

That’s when we were introduced to S2 Systems. I clearly remember first trying the S2 demo because my first reaction was: “This can’t be working correctly, it’s too fast.” The S2 team had taken a different approach to browser isolation. Rather than trying to push down a bitmap of what the screen looked like, instead they pushed down the vectors to draw what’s on the screen. The result was an experience that was typically at least as fast as browsing locally and without broken pages.

The best, albeit imperfect, analogy I’ve come up with to describe the difference between S2’s technology and other browser isolation companies is the difference between Windows XP and MacOS X when they were both launched in 2001. Windows XP’s original graphics were based on bitmapped images; MacOS X’s were based on vectors. Remember the magic of watching an application “genie” in and out of the MacOS X dock? Check it out in a video from the launch…

At the time watching a window slide in and out of the dock seemed like magic compared with what you could do with bitmapped user interfaces. You can hear the awe in the reaction from the audience. That awe that we’ve all gotten used to in UIs today comes from the power of vector images. And, if you’ve been underwhelmed by the pixel-pushed bitmaps of existing browser isolation technologies, just wait until you see what is possible with S2’s technology.

We were so impressed with the team and the technology that we acquired the company. We will be integrating the S2 technology into Cloudflare Gateway Enterprise. The browser isolation technology will run across Cloudflare’s entire global network, bringing it within milliseconds of virtually every Internet user. You can learn more about this approach in Darren Remington's blog post.

Once the rollout is complete in the second half of 2020 we expect we will be able to offer the first full browser isolation technology that doesn’t force you to sacrifice performance. In the meantime, if you’d like a demo of the S2 technology in action, let us know.

The Promise of a Faster Internet for Everyone

Cloudflare’s mission is to help build a better Internet. With Cloudflare for Teams, we’ve extended that network to protect the people and organizations that use the Internet to do their jobs. We’re excited to help a more modern, mobile, and cloud-enabled Internet be safer and faster than it ever was with traditional hardware appliances.

But the same technology we’re deploying now to improve enterprise security holds further promise. The most interesting Internet applications keep getting more complicated and, in turn, requiring more bandwidth and processing power to use.

For those of us fortunate enough to be able to afford the latest iPhone, we continue to reap the benefits of an increasingly powerful set of Internet-enabled tools. But try and use the Internet on a mobile phone from a few generations back, and you can see how quickly the latest Internet applications leave legacy devices behind. That’s a problem if we want to bring the next 4 billion Internet users online.

We need a paradigm shift if the sophistication of applications and complexity of interfaces continues to keep pace with the latest generation of devices. To make the best of the Internet available to everyone, we may need to shift the work of the Internet off the end devices we all carry around in our pockets and let the network — where power, bandwidth, and CPU are relatively plentiful — carry more of the load.

That’s the long term promise of what S2’s technology combined with Cloudflare’s network may someday power. If we can make it so a less expensive device can run the latest Internet applications — using less battery, bandwidth, and CPU than ever before possible — then we can make the Internet more affordable and accessible for everyone.

We started with Cloudflare for Infrastructure. Today we’re announcing Cloudflare for Teams. But our ambition is nothing short of Cloudflare for Everyone.

Early Feedback on Cloudflare for Teams from Customers and Partners

"Cloudflare Access has enabled Ziff Media Group to seamlessly and securely deliver our suite of internal tools to employees around the world on any device, without the need for complicated network configurations,” said Josh Butts, SVP Product & Technology, Ziff Media Group.


“VPNs are frustrating and lead to countless wasted cycles for employees and the IT staff supporting them,” said Amod Malviya, Cofounder and CTO, Udaan. “Furthermore, conventional VPNs can lull people into a false sense of security. With Cloudflare Access, we have a far more reliable, intuitive, secure solution that operates on a per user, per access basis. I think of it as Authentication 2.0 — even 3.0”


“Roman makes healthcare accessible and convenient,” said Ricky Lindenhovius, Engineering Director, Roman Health. “Part of that mission includes connecting patients to physicians, and Cloudflare helps Roman securely and conveniently connect doctors to internally managed tools. With Cloudflare, Roman can evaluate every request made to internal applications for permission and identity, while also improving speed and user experience.”


“We’re excited to partner with Cloudflare to provide our customers an innovative approach to enterprise security that combines the benefits of endpoint protection and network security," said Tom Barsi, VP Business Development, VMware. "VMware Carbon Black is a leading endpoint protection platform (EPP) and offers visibility and control of laptops, servers, virtual machines, and cloud infrastructure at scale. In partnering with Cloudflare, customers will have the ability to use VMware Carbon Black’s device health as a signal in enforcing granular authentication to a team’s internally managed application via Access, Cloudflare’s Zero Trust solution. Our joint solution combines the benefits of endpoint protection and a zero trust authentication solution to keep teams working on the Internet more secure."


“Rackspace is a leading global technology services company accelerating the value of the cloud during every phase of our customers’ digital transformation,” said Lisa McLin, vice president of alliances and channel chief at Rackspace. “Our partnership with Cloudflare enables us to deliver cutting edge networking performance to our customers and helps them leverage a software defined networking architecture in their journey to the cloud.”


“Employees are increasingly working outside of the traditional corporate headquarters. Distributed and remote users need to connect to the Internet, but today’s security solutions often require they backhaul those connections through headquarters to have the same level of security,” said Michael Kenney, head of strategy and business development for Ingram Micro Cloud. “We’re excited to work with Cloudflare whose global network helps teams of any size reach internally managed applications and securely use the Internet, protecting the data, devices, and team members that power a business.”


"At Okta, we’re on a mission to enable any organization to securely use any technology. As a leading provider of identity for the enterprise, Okta helps organizations remove the friction of managing their corporate identity for every connection and request that their users make to applications. We’re excited about our partnership with Cloudflare and bringing seamless authentication and connection to teams of any size,” said Chuck Fontana, VP, Corporate & Business Development, Okta.


"Organizations need one unified place to see, secure, and manage their endpoints,” said Matt Hastings, Senior Director of Product Management at Tanium. “We are excited to partner with Cloudflare to help teams secure their data, off-network devices, and applications. Tanium’s platform provides customers with a risk-based approach to operations and security with instant visibility and control into their endpoints. Cloudflare helps extend that protection by incorporating device data to enforce security for every connection made to protected resources.”


“OneLogin is happy to partner with Cloudflare to advance security teams' identity control in any environment, whether on-premise or in the cloud, without compromising user performance," said Gary Gwin, Senior Director of Product at OneLogin. "OneLogin’s identity and access management platform securely connects people and technology for every user, every app, and every device. The OneLogin and Cloudflare for Teams integration provides a comprehensive identity and network control solution for teams of all sizes.”


“Ping Identity helps enterprises improve security and user experience across their digital businesses,” said Loren Russon, Vice President of Product Management, Ping Identity. “Cloudflare for Teams integrates with Ping Identity to provide a comprehensive identity and network control solution to teams of any size, and ensures that only the right people get the right access to applications, seamlessly and securely."


"Our customers increasingly leverage deep observability data to address both operational and security use cases, which is why we launched Datadog Security Monitoring," said Marc Tremsal, Director of Product Management at Datadog. "Our integration with Cloudflare already provides our customers with visibility into their web and DNS traffic; we're excited to work together as Cloudflare for Teams expands this visibility to corporate environments."


“As more companies support employees who work on corporate applications from outside of the office, it is vital that they understand each request users are making. They need real-time insights and intelligence to react to incidents and audit secure connections," said John Coyle, VP of Business Development, Sumo Logic. "With our partnership with Cloudflare, customers can now log every request made to internal applications and automatically push them directly to Sumo Logic for retention and analysis."


“Cloudgenix is excited to partner with Cloudflare to provide an end-to-end security solution from the branch to the cloud. As enterprises move off of expensive legacy MPLS networks and adopt branch to internet breakout policies, the CloudGenix CloudBlade platform and Cloudflare for Teams together can make this transition seamless and secure. We’re looking forward to Cloudflare’s roadmap with this announcement and partnership opportunities in the near term.” said Aaron Edwards, Field CTO, Cloudgenix.


“In the face of limited cybersecurity resources, organizations are looking for highly automated solutions that work together to reduce the likelihood and impact of today’s cyber risks,” said Akshay Bhargava, Chief Product Officer, Malwarebytes. “With Malwarebytes and Cloudflare together, organizations are deploying more than twenty layers of security defense-in-depth. Using just two solutions, teams can secure their entire enterprise from device, to the network, to their internal and external applications.”


"Organizations' sensitive data is vulnerable in-transit over the Internet and when it's stored at its destination in public cloud, SaaS applications and endpoints,” said Pravin Kothari, CEO of CipherCloud. “CipherCloud is excited to partner with Cloudflare to secure data in all stages, wherever it goes. Cloudflare’s global network secures data in-transit without slowing down performance. CipherCloud CASB+ provides a powerful cloud security platform with end-to-end data protection and adaptive controls for cloud environments, SaaS applications and BYOD endpoints. Working together, teams can rely on integrated Cloudflare and CipherCloud solution to keep data always protected without compromising user experience.”


07:00

Security on the Internet with Cloudflare for Teams [The Cloudflare Blog]


Your experience using the Internet has continued to improve over time. It’s gotten faster, safer, and more reliable. However, you probably have to use a different, worse, equivalent of it when you do your work. While the Internet kept getting better, businesses and their employees were stuck using their own private networks.

In those networks, teams hosted their own applications, stored their own data, and protected all of it by building a castle and moat around that private world. This model hid internally managed resources behind VPN appliances and on-premise firewall hardware. The experience was awful, for users and administrators alike. While the rest of the Internet became more performant and more reliable, business users were stuck in an alternate universe.

That legacy approach was less secure and slower than teams wanted, but the corporate perimeter mostly worked for a time. However, that began to fall apart with the rise of cloud-delivered applications. Businesses migrated to SaaS versions of software that previously lived in that castle and behind that moat. Users needed to connect to the public Internet to do their jobs, and attackers made the Internet unsafe in sophisticated, unpredictable ways - which opened up every business to  a new world of never-ending risks.

How did enterprise security respond? By trying to solve a new problem with a legacy solution, and forcing the Internet into equipment that was only designed for private, corporate networks. Instead of benefitting from the speed and availability of SaaS applications, users had to backhaul Internet-bound traffic through the same legacy boxes that made their private network miserable.

Teams then watched as their bandwidth bills increased. More traffic to the Internet from branch offices forced more traffic over expensive, dedicated links. Administrators now had to manage a private network and the connections to the entire Internet for their users, all with the same hardware. More traffic required more hardware and the cycle became unsustainable.

Cloudflare’s first wave of products secured and improved the speed of those sites by letting customers, from free users to some of the largest properties on the Internet, replace that hardware stack with Cloudflare’s network. We could deliver capacity at a scale that would be impossible for nearly any company to build themselves. We deployed data centers in over 200 cities around the world that help us reach users wherever they are.

We built a unique network to let sites scale how they secured infrastructure on the Internet with their own growth. But internally, businesses and their employees were stuck using their own private networks.

Just as we helped organizations secure their infrastructure by replacing boxes, we can do the same for their teams and their data. Today, we’re announcing a new platform that applies our network, and everything we’ve learned, to make the Internet faster and safer for teams.

Cloudflare for Teams protects enterprises, devices, and data by securing every connection without compromising user performance. The speed, reliability and protection we brought to securing infrastructure is extended to everything your team does on the Internet.

The legacy world of corporate security

Organizations all share three problems they need to solve at the network level:

  1. Secure team member access to internally managed applications
  2. Secure team members from threats on the Internet
  3. Secure the corporate data that lives in both environments

Each of these challenges poses a real risk to any team. If any component is compromised, the entire business becomes vulnerable.

Internally managed applications

Solving the first bucket, internally managed applications, started by building a perimeter around those internal resources. Administrators deployed applications on a private network and users outside of the office connected to them with client VPN agents through VPN appliances that lived back on-site.

Users hated it, and they still do, because it made it harder to get their jobs done. A sales team member traveling to a customer visit in the back of a taxi had to start a VPN client on their phone just to review details about the meeting. An engineer working remotely had to sit and wait as every connection they made to developer tools was backhauled  through a central VPN appliance.

Administrators and security teams also had issues with this model. Once a user connects to the private network, they’re typically able to reach multiple resources without having to prove they’re authorized to do so. Just because I’m able to enter the front door of an apartment building doesn’t mean I should be able to walk into any individual apartment. However, on private networks, enforcing additional security within the bounds of the private network required complicated microsegmentation, if it was done at all.

Threats on the Internet

The second challenge, securing users connecting to SaaS tools on the public Internet and applications in the public cloud, required security teams to protect against known threats and potential zero-day attacks as their users left the castle and moat.

How did most companies respond? By forcing all traffic leaving branch offices or remote users back through headquarters and using the same hardware that secured their private network to try and build a perimeter around the Internet, at least the Internet their users accessed. All of the Internet-bound traffic leaving a branch office in Asia, for example, would be sent back through a central location in Europe, even if the destination was just down the street.

Organizations needed those connections to be stable, and to prioritize certain functions like voice and video, so they paid carriers to support dedicated multi-protocol label switching (MPLS) links. MPLS delivered improved performance by applying label switching to traffic which downstream routers can forward without needing to perform an IP lookup, but was eye-wateringly expensive.

Securing data

The third challenge, keeping data safe, became a moving target. Organizations had to keep data secure in a consistent way as it lived and moved between private tools on corporate networks and SaaS applications like Salesforce or Office 365.

The answer? More of the same. Teams backhauled traffic over MPLS links to a place where data could be inspected, adding more latency and introducing more hardware that had to be maintained.

What changed?

The balance of internal versus external traffic began to shift as SaaS applications became the new default for small businesses and Fortune 500s alike. Users now do most of their work on the Internet, with tools like Office 365 continuing to gain adoption. As those tools become more popular, more data leaves the moat and lives on the public Internet.

User behavior also changed. Users left the office and worked from multiple devices, both managed and unmanaged. Teams became more distributed and the perimeter was stretched to its limit.

This caused legacy approaches to fail

Legacy approaches to corporate security pushed the  castle and moat model further out. However, that model simply cannot scale with how users do work on the Internet today.

Internally managed applications

Private networks give users headaches, but they’re also a constant and complex chore to maintain. VPNs require expensive equipment that must be upgraded or expanded and, as more users leave the office, that equipment must try and scale up.

The result is a backlog of IT help desk tickets as users struggle with their VPN and, on the other side of the house, administrators and security teams try to put band-aids on the approach.

Threats on the Internet

Organizations initially saved money by moving to SaaS tools, but wound up spending more money over time as their traffic increased and bandwidth bills climbed.

Additionally, threats evolve. The traffic sent back to headquarters was secured with static models of scanning and filtering using hardware gateways. Users were still vulnerable to new types of threats that these on-premise boxes did not block yet.

Securing data

The cost of keeping data secure in both environments also grew. Security teams attempted to inspect Internet-bound traffic for threats and data loss by backhauling branch office traffic through on-premise hardware, degrading speed and increasing bandwidth fees.

Even more dangerous, data now lived permanently outside of that castle and moat model. Organizations were now vulnerable to attacks that bypassed their perimeter and targeted SaaS applications directly.

How will Cloudflare solve these problems?

Cloudflare for Teams consists of two products, Cloudflare Access and Cloudflare Gateway.

We launched Access last year and are excited to bring it into Cloudflare for Teams. We built Cloudflare Access to solve the first challenge that corporate security teams face: protecting internally managed applications.

Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.

Deploying Access does not require exposing new holes in corporate firewalls. Teams connect their resources through a secure outbound connection, Argo Tunnel, which runs in your infrastructure to connect the applications and machines to Cloudflare. That tunnel makes outbound-only calls to the Cloudflare network and organizations can replace complex firewall rules with just one: disable all inbound connections.

Administrators then build rules to decide who should authenticate to and reach the tools protected by Access. Whether those resources are virtual machines powering business operations or internal web applications, like Jira or iManage, when a user needs to connect, they pass through Cloudflare first.

When users need to connect to the tools behind Access, they are prompted to authenticate with their team’s SSO and, if valid, are instantly connected to the application without being slowed down. Internally-managed apps suddenly feel like SaaS products, and the login experience is seamless and familiar.

Behind the scenes, every request made to those internal tools hits Cloudflare first where we enforce identity-based policies. Access evaluates and logs every request to those apps for identity, to give administrators more visibility and to offer more security than a traditional VPN.

Every Cloudflare data center, in 200 cities around the world, performs the entire authentication check. Users connect faster, wherever they are working, versus having to backhaul traffic to a home office.

Access also saves time for administrators. Instead of configuring complex and error-prone network policies, IT teams build policies that enforce authentication using their identity provider. Security leaders can control who can reach internal applications in a single pane of glass and audit comprehensive logs from one source.

In the last year, we’ve released features that expand how teams can use Access so they can fully eliminate their VPN. We’ve added support for RDP, SSH, and released support for short-lived certificates that replace static keys. However, teams also use applications that do not run in infrastructure they control, such as SaaS applications like Box and Office 365. To solve that challenge, we’re releasing a new product, Cloudflare Gateway.

Cloudflare Gateway secures teams by making the first destination a Cloudflare data center located near them, for all outbound traffic. The product places Cloudflare’s global network between users and the Internet, rather than forcing the Internet through legacy hardware on-site.

Cloudflare Gateway’s first feature begins by preventing users from running into phishing scams or malware sites by combining the world’s fastest DNS resolver with Cloudflare’s threat intelligence. Gateway resolver can be deployed to office networks and user devices in a matter of minutes. Once configured, Gateway actively blocks potential malware and phishing sites while also applying content filtering based on policies administrators configure.

However, threats can be hidden in otherwise healthy hostnames. To protect users from more advanced threats, Gateway will audit URLs and, if enabled, inspect  packets to find potential attacks before they compromise a device or office network. That same deep packet inspection can then be applied to prevent the accidental or malicious export of data.

Organizations can add Gateway’s advanced threat prevention in two models:

  1. by connecting office networks to the Cloudflare security fabric through GRE tunnels and
  2. by distributing forward proxy clients to mobile devices.

The first model, delivered through Cloudflare Magic Transit, will give enterprises a way to migrate to Gateway without disrupting their current workflow. Instead of backhauling office traffic to centralized on-premise hardware, teams will point traffic to Cloudflare over GRE tunnels. Once the outbound traffic arrives at Cloudflare, Gateway can apply file type controls, in-line inspection, and data loss protection without impacting connection performance. Simultaneously, Magic Transit protects a corporate IP network from inbound attacks.

When users leave the office, Gateway’s client application will deliver the same level of Internet security. Every connection from the device will pass through Cloudflare first, where Gateway can apply threat prevention policies. Cloudflare can also deliver that security without compromising user experience, building on new technologies like the WireGuard protocol and integrating features from Cloudflare Warp, our popular individual forward proxy.

In both environments, one of the most common vectors for attacks is still the browser. Zero-day threats can compromise devices by using the browser as a vehicle to execute code.

Existing browser isolation solutions attempt to solve this challenge in one of two approaches: 1) pixel pushing and 2) DOM reconstruction. Both approaches lead to tradeoffs in performance and security. Pixel pushing degrades speed while also driving up the cost to stream sessions to users. DOM reconstruction attempts to strip potentially harmful content before sending it to the user. That tactic relies on known vulnerabilities and is still exposed to the zero day threats that isolation tools were meant to solve.

Cloudflare Gateway will feature always-on browser isolation that not only protects users from zero day threats, but can also make browsing the Internet faster. The solution will apply a patented approach to send vector commands that a browser can render without the need for an agent on the device. A user’s browser session will instead run in a Cloudflare data center where Gateway destroys the instance at the end of each session, keeping malware away from user devices without compromising performance.

When deployed, remote browser sessions will run in one of Cloudflare’s 200 data centers, connecting users to a faster, safer model of navigating the Internet without the compromises of legacy approaches. If you would like to learn more about this approach to browser isolation, I'd encourage you to read Darren Remington's blog post on the topic.

Why Cloudflare?

To make infrastructure safer, and web properties faster, Cloudflare built out one of the world’s largest and most sophisticated networks. Cloudflare for Teams builds on that same platform, and all of its unique advantages.

Fast

Security should always be bundled with performance. Cloudflare’s infrastructure products delivered better protection while also improving speed. That’s possible because of the network we’ve built, both its distribution and how the data we have about the network allows Cloudflare to optimize requests and connections.

Cloudflare for Teams brings that same speed to end users by using that same network and route optimization. Additionally, Cloudflare has built industry-leading components that will become features of this new platform. All of these components leverage Cloudflare’s network and scale to improve user performance.

Gateway’s DNS-filtering features build on Cloudflare’s 1.1.1.1 public DNS resolver, the world’s fastest resolver according to DNSPerf. To protect entire connections, Cloudflare for Teams will deploy the same technology that underpins Warp, a new type of VPN with consistently better reviews than competitors.

Massive scalability

Cloudflare’s 30 TBps of network capacity can scale to meet the needs of nearly any enterprise. Customers can stop worrying about buying enough hardware to meet their organization’s needs and, instead, replace it with Cloudflare.

Near users, wherever they are — literally

Cloudflare’s network operates in 200 cities and more than 90 countries around the world, putting Cloudflare’s security and performance close to users, wherever they work.

That network includes presence in global headquarters, like London and New York, but also in traditionally underserved regions around the world.

Cloudflare data centers operate within 100 milliseconds of 99% of the Internet-connected population in the developed world, and within 100 milliseconds of 94% of the Internet-connected population globally. All of your end users should feel like they have the performance traditionally only available to those in headquarters.

Easier for administrators

When security products are confusing, teams make mistakes that become incidents. Cloudflare’s solution is straightforward and easy to deploy. Most security providers in this market built features first and never considered usability or implementation.

Cloudflare Access can be deployed in less than an hour; Gateway features will build on top of that dashboard and workflow. Cloudflare for Teams brings the same ease-of-use of our tools that protect infrastructure to the products that now secure users, devices, and data.

Better threat intelligence

Cloudflare’s network already secures more than 20 million Internet properties and blocks 72 billion cyber threats each day. We build products using the threat data we gather from protecting 11 million HTTP requests per second on average.

What’s next?

Cloudflare Access is available right now. You can start replacing your team’s VPN with Cloudflare’s network today. Certain features of Cloudflare Gateway are available in beta now, and others will be added in beta over time. You can sign up to be notified about Gateway now.

07:00

Cloudflare + Remote Browser Isolation [The Cloudflare Blog]

Cloudflare announced today that it has purchased S2 Systems Corporation, a Seattle-area startup that has built an innovative remote browser isolation solution unlike any other currently in the market. The majority of endpoint compromises involve web browsers — by putting space between users’ devices and where web code executes, browser isolation makes endpoints substantially more secure. In this blog post, I’ll discuss what browser isolation is, why it is important, how the S2 Systems cloud browser works, and how it fits with Cloudflare’s mission to help build a better Internet.

What’s wrong with web browsing?

It’s been more than 30 years since Tim Berners-Lee wrote the project proposal defining the technology underlying what we now call the world wide web. What Berners-Lee envisioned as being useful for “several thousand people, many of them very creative, all working toward common goals”[1] has grown to become a fundamental part of commerce, business, the global economy, and an integral part of society used by more than 58% of the world’s population[2].

The world wide web and web browsers have unequivocally become the platform for much of the productive work (and play) people do every day. However, as the pervasiveness of the web grew, so did opportunities for bad actors. Hardly a day passes without a major new cybersecurity breach in the news. Several contributing factors have helped propel cybercrime to unprecedented levels: the commercialization of hacking tools, the emergence of malware-as-a-service, the presence of well-financed nation states and organized crime, and the development of cryptocurrencies which enable malicious actors of all stripes to anonymously monetize their activities.

The vast majority of security breaches originate from the web. Gartner calls the public Internet a “cesspool of attacks” and identifies web browsers as the primary culprit responsible for 70% of endpoint compromises.[3] This should not be surprising. Although modern web browsers are remarkable, many fundamental architectural decisions were made in the 1990’s before concepts like security, privacy, corporate oversight, and compliance were issues or even considerations. Core web browsing functionality (including the entire underlying WWW architecture) was designed and built for a different era and circumstances.

In today’s world, several web browsing assumptions are outdated or even dangerous. Web browsers and the underlying server technologies encompass an extensive – and growing – list of complex interrelated technologies. These technologies are constantly in flux, driven by vibrant open source communities, content publishers, search engines, advertisers, and competition between browser companies. As a result of this underlying complexity, web browsers have become primary attack vectors. According to Gartner, “the very act of users browsing the internet and clicking on URL links opens the enterprise to significant risk. […] Attacking thru the browser is too easy, and the targets too rich.”[4] Even “ostensibly ‘good’ websites are easily compromised and can be used to attack visitors” (Gartner[5]), with more than 40% of malicious URLs found on good domains (Webroot[6]). (A complete list of vulnerabilities is beyond the scope of this post.)

The very structure and underlying technologies that power the web are inherently difficult to secure. Some browser vulnerabilities result from illegitimate use of legitimate functionality: enabling browsers to download files and documents is good, but allowing downloading of files infected with malware is bad; dynamic loading of content across multiple sites within a single webpage is good, but cross-site scripting is bad; enabling an extensive advertising ecosystem is good, but the inability to detect hijacked links or malicious redirects to malware or phishing sites is bad; etc.

Enterprise Browsing Issues

Enterprises have additional challenges with traditional browsers.

Paradoxically, IT departments have the least amount of control over the most ubiquitous app in the enterprise – the web browser. The most common complaints about web browsers from enterprise security and IT professionals are:

  1. Security (obviously). The public internet is a constant source of security breaches and the problem is growing given an 11x escalation in attacks since 2016 (Meeker[7]). Costs of detection and remediation are escalating and the reputational damage and financial losses for breaches can be substantial.
  2. Control. IT departments have little visibility into user activity and limited ability to leverage content disarm and reconstruction (CDR) and data loss prevention (DLP) mechanisms, including visibility into when, where, or by whom files are downloaded or uploaded.
  3. Compliance. IT departments often cannot control data and activity across geographies, or capture the audit telemetry required to meet increasingly strict regulatory requirements. This results in significant exposure to penalties and fines.

Given vulnerabilities exposed through everyday user activities such as email and web browsing, some organizations attempt to restrict these activities. As both are legitimate and critical business functions, efforts to limit or curtail web browser use inevitably fail or have a substantive negative impact on business productivity and employee morale.

Current approaches to mitigating security issues inherent in browsing the web are largely based on signature technology for data files and executables, and lists of known good/bad URLs and DNS addresses. The challenge with these approaches is the difficulty of keeping current with known attacks (file signatures, URLs and DNS addresses) and their inherent vulnerability to zero-day attacks. Hackers have devised automated tools to defeat signature-based approaches (e.g. generating hordes of files with unknown signatures) and create millions of transient websites in order to defeat URL/DNS blacklists.

While these approaches certainly prevent some attacks, the growing number of incidents and severity of security breaches clearly indicate more effective alternatives are needed.

What is browser isolation?

The core concept behind browser isolation is security-through-physical-isolation to create a “gap” between a user’s web browser and the endpoint device thereby protecting the device (and the enterprise network) from exploits and attacks. Unlike secure web gateways, antivirus software, or firewalls which rely on known threat patterns or signatures, this is a zero-trust approach.

There are two primary browser isolation architectures: (1) client-based local isolation and (2) remote isolation.

Local browser isolation attempts to isolate a browser running on a local endpoint using app-level or OS-level sandboxing. In addition to leaving the endpoint at risk when there is an isolation failure, these systems require significant endpoint resources (memory + compute), tend to be brittle, and are difficult for IT to manage as they depend on support from specific hardware and software components.

Further, local browser isolation does nothing to address the control and compliance issues mentioned above.

Remote browser isolation (RBI) protects the endpoint by moving the browser to a remote service in the cloud or to a separate on-premises server within the enterprise network:

  • On-premises isolation simply relocates the risk from the endpoint to another location within the enterprise without actually eliminating the risk.
  • Cloud-based remote browsing isolates the end-user device and the enterprise’s network while fully enabling IT control and compliance solutions.

Given the inherent advantages, most browser isolation solutions – including S2 Systems – leverage cloud-based remote isolation. Properly implemented, remote browser isolation can protect the organization from browser exploits, plug-ins, zero-day vulnerabilities, malware and other attacks embedded in web content.

How does Remote Browser Isolation (RBI) work?

In a typical cloud-based RBI system (the blue-dashed box ❶ below), individual remote browsers ❷ are run in the cloud as disposable containerized instances – typically, one instance per user. The remote browser sends the rendered contents of a web page to the user endpoint device ❹ using a specific protocol and data format ❸. Actions by the user, such as keystrokes, mouse and scroll commands, are sent back to the isolation service over a secure encrypted channel where they are processed by the remote browser and any resulting changes to the remote browser webpage are sent back to the endpoint device.


In effect, the endpoint device is “remote controlling” the cloud browser. Some RBI systems use proprietary clients installed on the local endpoint while others leverage existing HTML5-compatible browsers on the endpoint and are considered ‘clientless.’

Data breaches that occur in the remote browser are isolated from the local endpoint and enterprise network. Every remote browser instance is treated as if compromised and terminated after each session. New browser sessions start with a fresh instance. Obviously, the RBI service must prevent browser breaches from leaking outside the browser containers to the service itself. Most RBI systems provide remote file viewers negating the need to download files but also have the ability to inspect files for malware before allowing them to be downloaded.

A critical component in the above architecture is the specific remoting technology employed by the cloud RBI service. The remoting technology has a significant impact on the operating cost and scalability of the RBI service, website fidelity and compatibility, bandwidth requirements, endpoint hardware/software requirements and even the user experience. Remoting technology also determines the effective level of security provided by the RBI system.

All current cloud RBI systems employ one of two remoting technologies:

(1)    Pixel pushing is a video-based approach which captures pixel images of the remote browser ‘window’ and transmits a sequence of images to the client endpoint browser or proprietary client. This is similar to how remote desktop and VNC systems work. Although considered to be relatively secure, there are several inherent challenges with this approach:

  • Continuously encoding and transmitting video streams of remote webpages to user endpoint devices is very costly. Scaling this approach to millions of users is financially prohibitive and logistically complex.
  • Requires significant bandwidth. Even when highly optimized, pushing pixels is bandwidth intensive.
  • Unavoidable latency results in an unsatisfactory user experience. These systems tend to be slow and generate a lot of user complaints.
  • Mobile support is degraded by high bandwidth requirements compounded by inconsistent connectivity.
  • HiDPI displays may render at lower resolutions. Pixel counts grow quadratically with resolution, which means remote browser sessions (particularly fonts) on HiDPI devices can appear fuzzy or out of focus.

(2) DOM reconstruction emerged as a response to the shortcomings of pixel pushing. DOM reconstruction attempts to clean webpage HTML, CSS, etc. before forwarding the content to the local endpoint browser. The underlying HTML, CSS, etc., are reconstructed in an attempt to eliminate active code, known exploits, and other potentially malicious content. While addressing the latency, operational cost, and user experience issues of pixel pushing, it introduces two significant new issues:

  • Security. The underlying technologies – HTML, CSS, web fonts, etc. – are the attack vectors hackers leverage to breach endpoints. Attempting to remove malicious content or code is like washing mosquitos: you can attempt to clean them, but they remain inherent carriers of dangerous and malicious material. It is impossible to identify, in advance, all the means of exploiting these technologies even through an RBI system.
  • Website fidelity. Inevitably, attempting to remove malicious active code, reconstructing HTML, CSS and other aspects of modern websites results in broken pages that don’t render properly or don’t render at all. Websites that work today may not work tomorrow as site publishers make daily changes that may break DOM reconstruction functionality. The result is an infinite tail of issues requiring significant resources in an endless game of whack-a-mole. Some RBI solutions struggle to support common enterprise-wide services like Google G Suite or Microsoft Office 365 even as malware laden web email continues to be a significant source of breaches.

Customers are left to choose between a secure solution with a bad user experience and high operating costs, or a faster, much less secure solution that breaks websites. These tradeoffs have driven some RBI providers to implement both remoting technologies into their products. However, this leaves customers to pick their poison without addressing the fundamental issues.

Given the significant tradeoffs in RBI systems today, one common optimization for current customers is to deploy remote browsing capabilities to only the most vulnerable users in an organization such as high-risk executives, finance, business development, or HR employees. Like vaccinating half the pupils in a classroom, this results in a false sense of security that does little to protect the larger organization.

Unfortunately, the largest “gap” created by current remote browser isolation systems is the void between the potential of the underlying isolation concept and the implementation reality of currently available RBI systems.

S2 Systems Remote Browser Isolation

S2 Systems remote browser isolation is a fundamentally different approach based on S2-patented technology called Network Vector Rendering (NVR).

The S2 remote browser is based on the open-source Chromium engine on which Google Chrome is built. In addition to powering Google Chrome which has a ~70% market share[8], Chromium powers twenty-one other web browsers including the new Microsoft Edge browser.[9] As a result, significant ongoing investment in the Chromium engine ensures the highest levels of website support, compatibility and a continuous stream of improvements.

A key architectural feature of the Chromium browser is its use of the Skia graphics library. Skia is a widely-used cross-platform graphics engine for Android, Google Chrome, Chrome OS, Mozilla Firefox, Firefox OS, FitbitOS, Flutter, the Electron application framework and many other products. Like Chromium, the pervasiveness of Skia ensures ongoing broad hardware and platform support.

Skia code fragment

Everything visible in a Chromium browser window is rendered through the Skia rendering layer. This includes application window UI such as menus, but more importantly, the entire contents of the webpage window are rendered through Skia. Chromium compositing, layout and rendering are extremely complex with multiple parallel paths optimized for different content types, device contexts, etc. The following figure is an egregious simplification for illustration purposes of how S2 works (apologies to Chromium experts):


S2 Systems NVR technology intercepts the remote Chromium browser’s Skia draw commands ❶, tokenizes and compresses them, then encrypts and transmits them across the wire ❷ to any HTML5 compliant web browser ❸ (Chrome, Firefox, Safari, etc.) running locally on the user endpoint desktop or mobile device. The Skia API commands captured by NVR are pre-rasterization which means they are highly compact.

On first use, the S2 RBI service transparently pushes an NVR WebAssembly (Wasm) library ❹ to the local HTML5 web browser on the endpoint device where it is cached for subsequent use. The NVR Wasm code contains an embedded Skia library and the necessary code to unpack, decrypt and “replay” the Skia draw commands from the remote RBI server to the local browser window. WebAssembly’s ability to “execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms”[10] results in near-native drawing performance.
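
To make the replay idea concrete, here is a purely illustrative TypeScript sketch of a thin client that receives serialized draw commands over a secure channel, re-executes them against a local canvas, and forwards user input back to the remote browser. The command shape, the endpoint URL, and the use of JSON are assumptions made for readability; S2’s actual NVR wire format, compression, and Skia-based replay code are proprietary and not described in this post.

// Hypothetical draw-command shape; the real NVR encoding is binary, compressed and encrypted.
type DrawCommand =
  | { op: "rect"; x: number; y: number; w: number; h: number; color: string }
  | { op: "text"; x: number; y: number; text: string; font: string };

const canvas = document.getElementById("remote-view") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

// Placeholder endpoint; draw commands arrive pre-rasterization, so they stay compact.
const socket = new WebSocket("wss://rbi.example.com/session");

socket.onmessage = (event) => {
  const commands: DrawCommand[] = JSON.parse(event.data);
  for (const cmd of commands) {
    if (cmd.op === "rect") {
      ctx.fillStyle = cmd.color;
      ctx.fillRect(cmd.x, cmd.y, cmd.w, cmd.h);
    } else {
      ctx.font = cmd.font;
      ctx.fillText(cmd.text, cmd.x, cmd.y);
    }
  }
};

// User input flows the other way: local events are forwarded to the remote browser.
canvas.addEventListener("click", (e) => {
  socket.send(JSON.stringify({ type: "click", x: e.offsetX, y: e.offsetY }));
});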

The S2 remote browser isolation service uses headless Chromium-based browsers in the cloud, transparently intercepts draw layer output, transmits the draw commands efficiently and securely over the web, and redraws them in the windows of local HTML5 browsers. This architecture has a number of technical advantages:

(1)    Security: the underlying data transport is not an existing attack vector and customers aren’t forced to make a tradeoff between security and performance.

(2)    Website compatibility: there are no website compatibility issues, nor a long tail of work chasing evolving web technologies or emerging vulnerabilities.

(3)    Performance: the system is very fast, typically faster than local browsing (subject of a future blog post).

(4)    Transparent user experience: S2 remote browsing feels like native browsing; users are generally unaware when they are browsing remotely.

(5)    Requires less bandwidth than local browsing for most websites. Enables advanced caching and other proprietary optimizations unique to web browsers and the nature of web content and technologies.

(6)    Clientless: leverages existing HTML5 compatible browsers already installed on user endpoint desktop and mobile devices.

(7)    Cost-effective scalability: although the details are beyond the scope of this post, the S2 backend and NVR technology have substantially lower operating costs than existing RBI technologies. Operating costs translate directly to customer costs. The S2 system was designed to make deployment to an entire enterprise and not just targeted users (aka: vaccinating half the class) both feasible and attractive for customers.

(8)    RBI-as-a-platform: enables implementation of related/adjacent services such as DLP, content disarm & reconstruction (CDR), phishing detection and prevention, etc.

The S2 Systems Remote Browser Isolation Service and underlying NVR technology eliminate the disconnect between the conceptual potential and promise of browser isolation and the unsatisfying reality of current RBI technologies.

Cloudflare + S2 Systems Remote Browser Isolation

Cloudflare’s global cloud platform is uniquely suited to remote browsing isolation. Seamless integration with our cloud-native performance, reliability and advanced security products and services provides powerful capabilities for our customers.

Our Cloudflare Workers architecture enables edge computing in 200 cities in more than 90 countries and will put a remote browser within 100 milliseconds of 99% of the Internet-connected population in the developed world. With more than 20 million Internet properties directly connected to our network, Cloudflare remote browser isolation will benefit from locally cached data and builds on the impressive connectivity and performance of our network. Our Argo Smart Routing capability leverages our communications backbone to route traffic across faster and more reliable network paths resulting in an average 30% faster access to web assets.

Once it has been integrated with our Cloudflare for Teams suite of advanced security products, remote browser isolation will provide protection from browser exploits, zero-day vulnerabilities, malware and other attacks embedded in web content. Enterprises will be able to secure the browsers of all employees without having to make trade-offs between security and user experience. The service will enable IT control of browser-conveyed enterprise data and compliance oversight. Seamless integration across our products and services will enable users and enterprises to browse the web without fear or consequence.

Cloudflare’s mission is to help build a better Internet. This means protecting users and enterprises as they work and play on the Internet; it means making Internet access fast, reliable and transparent. Reimagining and modernizing how web browsing works is an important part of helping build a better Internet.


[1] https://www.w3.org/History/1989/proposal.html

[2] “Internet World Stats,” https://www.internetworldstats.com/, retrieved 12/21/2019.

[3] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation” (report ID: G00350577), 8 March 2018

[4] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[5] Gartner, Inc., Neil MacDonald, “Innovation Insight for Remote Browser Isolation”, 8 March 2018

[6] “2019 Webroot Threat Report: Forty Percent of Malicious URLs Found on Good Domains”, February 28, 2019

[7] “Kleiner Perkins 2018 Internet Trends”, Mary Meeker.

[8] https://www.statista.com/statistics/544400/market-share-of-internet-browsers-desktop/, retrieved December 21, 2019

[9] https://en.wikipedia.org/wiki/Chromium_(web_browser), retrieved December 29, 2019

[10] https://webassembly.org/, retrieved December 30, 2019

Monday, 06 January

08:30

Saturday Morning Breakfast Cereal - First Date [Saturday Morning Breakfast Cereal]

Hovertext:
I've been married 10 years, and I'm still not comfortable asking for this.

03:00

Most read articles in 2019 not from 2019 [Fedora Magazine]

Some topics are very popular, no matter when they’re first mentioned. And Fedora Magazine has a few articles that have proven to be popular for a long time.

You’re reading the last article from the “best of 2019” series. But this time, it’s about articles written before 2019 that remained very popular throughout 2019.

All of the articles below have been checked and updated to be correct even now, in early 2020. Let’s dive in!

i3 tiling window manager

Wish to try an alternative desktop? The following article introduces i3 — a tiling window manager that doesn’t require high-end hardware, but is powerful and highly customizable. You’ll learn about the installation process, some initial setup, and a few tricks to get you started.

Powerline

Would you like to have your shell a bit more organized? Then you might want to try Powerline — a utility that gives you status information, and some visual tweaks to your shell to make it more pleasant and organized.

Monospace fonts

Do you spend a lot of your time in a terminal or a code editor? And is your font making you happy? Discover some beautiful monospace fonts available in the Fedora repositories.

Image viewers

Is the default image viewer on your desktop not working the way you want? The following article shows 17 image viewers available in Fedora — varying from simpler to ones full of features.

Fedora as a VirtualBox guest

Love Fedora but your machine runs Windows or macOS? One option to get Fedora running on your machine is virtualization. Your system keeps running and you’ll be able to access Fedora at the same time in a virtual machine. The following article introduces VirtualBox that can do just that.

Sunday, 05 January

08:20

Saturday Morning Breakfast Cereal - Cute [Saturday Morning Breakfast Cereal]

Hovertext:
Anyone who wants to animate the last panel's eyebrows will be awarded Ten Internet Points immediately.

Saturday, 04 January

10:00

Cloudflare Expanded to 200 Cities in 2019 [The Cloudflare Blog]

Cloudflare Expanded to 200 Cities in 2019

We have exciting news: Cloudflare closed out the decade by reaching our 200th city* across 90+ countries. Each new location increases the security, performance, and reliability of the 20-million-plus Internet properties on our network. Over the last quarter, we turned up seven data centers spanning from Chattogram, Bangladesh all the way to the Hawaiian Islands:

  • Chattogram & Dhaka, Bangladesh. These data centers are our first in Bangladesh, ensuring that its 161 million residents will have a better experience on our network.
  • Honolulu, Hawaii, USA. Honolulu is one of the most remote cities in the world; with our Honolulu data center up and running, Hawaiian visitors can be served 2,400 miles closer than ever before! Hawaii is a hub for many submarine cables in the Pacific, meaning that some Pacific Islands will also see significant improvements.
  • Adelaide, Australia. Our 7th Australasian data center can be found “down under” in the capital of South Australia. Despite being Australia’s fifth-largest city, Adelaide is often overlooked for Australian interconnection. We, for one, are happy to establish a presence in it and its unique UTC+9:30 time zone!
  • Thimphu, Bhutan. Bhutan is the seventh SAARC (South Asian Association for Regional Cooperation) country with a Cloudflare network presence. Thimphu is our first Bhutanese data center, continuing our mission of security and performance for all.
  • St George’s, Grenada. Our Grenadian data center is joining the Grenada Internet Exchange (GREX), the first non-profit Internet Exchange (IX) in the English-speaking Caribbean.

We’ve come a long way since our launch in 2010, moving from colocating in key Internet hubs to fanning out across the globe and partnering with local ISPs. This has allowed us to offer security, performance, and reliability to Internet users in all corners of the world. In addition to the 35 cities we added in 2019, we expanded our existing data centers behind-the-scenes. We believe there are a lot of opportunities to harness in 2020 as we look to bring our network and its edge-computing power closer and closer to everyone on the Internet.

*Includes cities where we have data centers with active Internet ports and those where we are configuring our servers to handle traffic for more customers (at the time of publishing).

05:04

Saturday Morning Breakfast Cereal - Coffee [Saturday Morning Breakfast Cereal]

Hovertext:
I need to do a book of just comics that end with God laughing.

Friday, 03 January

03:30

Sailfish SDK 3.0 is now available [Jolla Blog]

This new release contains several updates for the entire SDK system. Some of the changes are already visible through the interface within this update, but more will become available in future releases building on the enabling features we’ve included in this release. An example of these upcoming changes is the possibility to support different kinds of virtualization technologies for the build engine and the emulators.

Command line interface

Our command line tool (sfdk), which we already introduced in version 2.2, receives an upgrade in this release. As a result of these changes it is now possible to use the SDK within a continuous integration environment.

For users who are comfortable using Qt Creator you can continue using it as before. However, if you want to script parts of the development process, or if you’re just happiest working from the command line, then sfdk provides important benefits. We’ll look briefly at some of the things you can do below to give you a taste.

Non-interactive SDK installation

You can install the SDK non-interactively (and even on a headless system) using the following syntax:


./SailfishSDK-3.0.7-linux64-offline.run --verbose non-interactive=1 accept-licenses=1 --platform minimal

After your packages have been built, it is also possible to uninstall the SDK non-interactively:

~/SailfishOS/SDKMaintenanceTool --verbose non-interactive=1 -platform minimal

Building packages

Those users who have used the mb2 tool in our Platform SDK will find the usage really familiar:

cd ~/src/myproject
~/SailfishOS/bin/sfdk build

Quite often, that’s really all there is to it! If you have created your project using the wizard in Qt Creator, it should already have the necessary .spec and .pro files in place. It is also possible to create an empty project using the sfdk tool:

mkdir mynewapp
cd mynewapp
~/SailfishOS/bin/sfdk init -t qtquick
~/SailfishOS/bin/sfdk build

And there’s more!

You can also manage the build engine, install new build targets, control the emulator, deploy packages to a device, and more, all through the command line. Have a look at the help for instructions:

~/SailfishOS/bin/sfdk --help-all

The detailed release notes for SDK 3.0 are available on together.jolla.com.

We hope you enjoy using the new SDK tools and we look forward to bringing you the other improvements we’ve been working on in the future.

The post Sailfish SDK 3.0 is now available appeared first on Jolla Blog.

01:00

Tracking Translations with Transtats [Fedora Magazine]

Translation is an important step in software localization: it helps make software more popular globally and shapes the international user experience. In recent years, localization processes worldwide have been evolving to become more continuous, faster, and more efficient through automation. In Fedora, the development of the Zanata platform and its plugins, then Transtats, and now the migration to the Weblate platform are part of this common ongoing goal. The localization of a desktop OS like Fedora is highly complex because it depends on many factors of the individual upstream projects packaged in Fedora – for example, different translation timelines, resources, and tooling.

What is Transtats?

Transtats is a web application that ties together upstream repositories, translation platforms, the build system, and the product release schedule to solve mismatch and out-of-sync problems and to assist the timely packaging of quality translations. It collects translation data, analyzes it, and creates meaningful representations.

Fedora Transtats is hosted at https://transtats.fedoraproject.org/

How to see the translation status of my package?

Just select the Packages tab from the left-hand navigation bar. This takes us to the package list view. Then search for the package and click on its name.

For example, anaconda. On the package details page, locate the following:

Here we have translation statistics from the translation platform (Zanata) and the Koji build system. Syncs with the platform and the build system are scheduled, so the differences are updated periodically. Languages in red indicate that there are translated strings remaining on the translation platform still to be pulled and packaged, whereas blue denotes translated messages that could not reach 100% in the built package.

String breakage (or changes?)

In the translation of software packages, one of the challenges is to prevent string breakage. Package maintainers should strive to abide by the scheduled Fedora release String Freeze. However, in some circumstances it may be necessary to break the string freeze, inform the translation team on the mailing list, and update the latest translation template (POT) file on the translation platform. If these actions are missed, translators may get new strings to translate very late, or the application may ship with some strings untranslated. In the worst case, an outdated translation string mismatch may result in a crash. Sync and automation pipelines are there to prevent this; nevertheless, it depends on the push or pull methods followed by package developers or maintainers.

To deal with this, we can use a job template in Transtats to detect string changes – particularly useful after the string freeze in the Fedora release schedule. This is helpful for anyone who wants to package translations without string breakage, keep the translation template (POT) file in sync with the translation platform, and test the localized form of the application for translation completeness.

How to detect string changes?

One of the options in the Jobs tab is ‘YML based Jobs’, where we can see the available job templates.

The jobs framework executes all the tasks mentioned in the YAML, creates appropriate logs, and stores the results. The Track String Change job basically:

  1. Clones the source repository of the respective package.
  2. Tries to generate the translation template (POT) file.
  3. Downloads the POT file from the respective translation platform.
  4. Finds the differences between the two POT files.

Transtats maintains a mapping of the upstream repository, translation platform project, and respective build tag for every package.

Let’s take a closer look at this YAML. We can provide values for %PACKAGE_NAME% and %RELEASE_SLUG% in the next step – Set Values! For example: anaconda and fedora-32. Furthermore, a couple of things deserve attention:

  • In case the upstream software repository maintains separate git branch for fedora release, please edit ‘branch: master’ to ‘branch: <fedora-release-branch>’
  • In the ‘generate’ block, mention the command to generate the POT file. The default should work only for ‘intltool-update’; however, many packages have their own.
  • A few packages may have a gettext domain name different from the package name. If this is the case, mention the gettext domain too.

As soon as the job is triggered, logs should be populated. If this is not a scratch run, a unique URL will also be created at the end.

The left-hand side is the input YAML and the right-hand side is the respective log for each task. Here we can find the differences and figure out string mismatches.

In Transtats, we can create solutions to different problems in the form of job templates. Scheduling these jobs could be a step towards automation.

Thursday, 02 January

11:13

Saturday Morning Breakfast Cereal - Wisdom [Saturday Morning Breakfast Cereal]

Hovertext:
She is like a ghost who is very concerned about purchasing organic food products.

Wednesday, 01 January

Tuesday, 31 December

12:13

Adopting a new approach to HTTP prioritization [The Cloudflare Blog]

Adopting a new approach to HTTP prioritization

Friday the 13th is a lucky day for Cloudflare for many reasons. On December 13, 2019 Tommy Pauly, co-chair of the IETF HTTP Working Group, announced the adoption of the "Extensible Prioritization Scheme for HTTP" - a new approach to HTTP prioritization.

Web pages are made up of many resources that must be downloaded before they can be presented to the user. The role of HTTP prioritization is to load the right bytes at the right time in order to achieve the best performance. This is a collaborative process between client and server: a client sends priority signals that the server can use to schedule the delivery of response data. In HTTP/1.1 the signal is basic: clients order requests smartly across a pool of about 6 connections. In HTTP/2 a single connection is used and clients send a signal per request, as a frame, which describes the relative dependency and weighting of the response. HTTP/3 tried to use the same approach but dependencies don't work well when signals can be delivered out of order.

HTTP/3 is being standardised as part of the QUIC effort. As a Working Group (WG) we've been trying to fix the problems that non-deterministic ordering poses for HTTP priorities. However, in parallel some of us have been working on an alternative solution, the Extensible Prioritization Scheme, which fixes problems by dropping dependencies and using an absolute weighting. This is signalled in an HTTP header field meaning it can be backported to work with HTTP/2 or carried over HTTP/1.1 hops. The alternative proposal is documented in the Individual-Draft draft-kazuho-httpbis-priority-04, co-authored by Kazuho Oku (Fastly) and myself. This has now been adopted by the IETF HTTP WG as the basis of further work; its adopted name will be draft-ietf-httpbis-priority-00.

To some extent document adoption is the end of one journey and the start of the next; sometimes the authors of the original work are not the best people to oversee the next phase. However, I'm pleased to say that Kazuho and I have been selected as co-editors of this new document. In this role we will reflect the consensus of the WG and help steward the next chapter of HTTP prioritization standardisation. Before the next journey begins in earnest, I wanted to take the opportunity to share my thoughts on the story of developing the alternative prioritization scheme through 2019.

I'd love to explain all the details of this new approach to HTTP prioritization but the truth is I expect the standardization process to refine the design and for things to go stale quickly. However, it doesn't hurt to give a taste of what's in store, just be aware that it is all subject to change.

A recap on priorities

The essence of HTTP prioritization comes down to trying to download many things over constrained connectivity. To borrow some text from Pat Meenan: Web pages are made up of dozens (sometimes hundreds) of separate resources that are loaded and assembled by a browser into the final displayed content. Since it is not possible to download everything immediately, we prefer to fetch more important things before less important ones. The challenge comes in signalling the importance from client to server.

In HTTP/2, every connection has a priority tree that expresses the relative importance between requests. Servers use this to determine how to schedule sending response data. The tree starts with a single root node and as requests are made they either depend on the root or each other. Servers may use the tree to decide how to schedule sending resources but clients cannot force a server to behave in any particular way.

To illustrate, imagine a client that makes three simple GET requests that all depend on root. As the server receives each request it grows its view of the priority tree:

The server starts with only the root node of the priority tree. As requests arrive, the tree grows. In this case all requests depend on the root, so the requests are priority siblings.

Once all requests are received, the server determines all requests have equal priority and that it should send response data using round-robin scheduling: send some fraction of response 1, then a fraction of response 2, then a fraction of response 3, and repeat until all responses are complete.
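
As a minimal sketch of that scheduling behaviour (not any particular server's implementation), round-robin interleaving of equal-priority responses can be modelled as taking one chunk from each response in turn until every queue is drained:

// Each response is modelled here as a queue of data chunks awaiting transmission.
function roundRobin(responses: Uint8Array[][]): Uint8Array[] {
  const queues = responses.map((chunks) => [...chunks]);
  const sent: Uint8Array[] = [];
  while (queues.some((q) => q.length > 0)) {
    for (const q of queues) {
      const chunk = q.shift();      // take one chunk from this response, if any remain
      if (chunk) sent.push(chunk);
    }
  }
  return sent; // the order in which data would be written to the connection
}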

A single HTTP/2 request-response exchange is made up of frames that are sent on a stream. A simple GET request would be sent using a single HEADERS frame:

HTTP/2 HEADERS frame

Each region of a frame is a named field, a '?' indicates the field is optional and the value in parenthesis is the length in bytes with '*' meaning variable length. The Header Block Fragment field holds compressed HTTP header fields (using HPACK), Pad Length and Padding relate to optional padding, and E, Stream Dependency and Weight combined are the priority signal that controls the priority tree.

The Stream Dependency and Weight fields are optional but their absence is interpreted as a signal to use the default values: dependency on the root with a weight of 16, meaning that the default priority scheduling strategy is round-robin. However, this is often a bad choice because important resources like HTML, CSS and JavaScript are tied up with things like large images. The following animation demonstrates this in the Edge browser, causing the page to be blank for 19 seconds. Our deep dive blog post explains the problem further.


The HEADERS frame E field is the interesting bit (pun intended). A request with the field set to 1 (true) means that the dependency is exclusive and nothing else can depend on the indicated node. To illustrate, imagine a client that sends three requests which set the E field to 1. As the server receives each request, it interprets this as an exclusive dependency on the root node. Because all requests have the same dependency on root, the tree has to be shuffled around to satisfy the exclusivity rules.

Each request has an exclusive dependency on the root node. The tree is shuffled as each request is received by the server.

The final version of the tree looks very different from our previous example. The server would schedule all of response 3, then all of response 2, then all of response 1. This could help load all of an HTML file before an image and thus improve the visual load behaviour.
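
For illustration, the exclusive-insertion rule described above can be sketched as a small tree operation (following the HTTP/2 model in general terms, not any specific server's code): the new stream becomes the sole child of its parent and adopts the parent's previous children.

interface TreeNode { id: number; children: TreeNode[]; }

// The new node becomes the sole child of `parent`, adopting its previous children.
function insertExclusive(parent: TreeNode, id: number): TreeNode {
  const node: TreeNode = { id, children: parent.children };
  parent.children = [node];
  return node;
}

// Requests 1, 2 and 3 made exclusive on root end up chained root -> 3 -> 2 -> 1,
// so the server would send all of response 3, then response 2, then response 1.
const root: TreeNode = { id: 0, children: [] };
[1, 2, 3].forEach((id) => insertExclusive(root, id));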

In reality, clients load a lot more than three resources and use a mix of priority signals. To understand the priority of any single request, we need to understand all requests. That presents some technological challenges, especially for servers that act like proxies such as the Cloudflare edge network. Some servers have problems applying prioritization effectively.

Because not all clients send the most optimal priority signals we were motivated to develop Cloudflare's Enhanced HTTP/2 Prioritization, announced last May during Speed Week. This was a joint project between the Speed team (Andrew Galloni, Pat Meenan, Kornel Lesiński) and Protocols team (Nick Jones, Shih-Chiang Chien) and others. It replaces the complicated priority tree with a simpler scheme that is well suited to web resources. Because the feature is implemented on the server side, we avoid requiring any modification of clients or the HTTP/2 protocol itself. Be sure to check out my colleague Nick's blog post that details some of the technical challenges and changes needed to let our servers deliver smarter priorities.

The Extensible Prioritization Scheme proposal

The scheme specified in draft-kazuho-httpbis-priority-04 defines a way for priorities to be expressed in absolute terms. It replaces HTTP/2's dependency-based relative prioritization: the priority of a request is independent of others, which makes it easier to reason about and easier to schedule.

Rather than send the priority signal in a frame, the scheme defines an HTTP header - tentatively named "Priority" - that can carry an urgency on a scale of 0 (highest) to 7 (lowest). For example, a client could express the priority of an important resource by sending a request with:

Priority: u=0

And a less important background resource could be requested with:

Priority: u=7
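
In practice a browser would generate this signal itself when fetching subresources, but as a rough sketch of what the header looks like on a request, an application could also attach it explicitly (assuming a server that understands the draft header; the URL is a placeholder):

// Low-urgency background fetch, using the draft's 0 (highest) to 7 (lowest) urgency scale.
fetch("https://example.com/background.png", {
  headers: { Priority: "u=7" },
});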

While Kazuho and I are the main authors of this specification, we were inspired by several ideas in the Internet community, and we have incorporated feedback or direct input from many of our peers in the Internet community over several drafts. The text today reflects the efforts-so-far of cross-industry work involving many engineers and researchers including organizations such as Adobe, Akamai, Apple, Cloudflare, Fastly, Facebook, Google, Microsoft, Mozilla and UHasselt. Adoption in the HTTP Working Group means that we can help improve the design and specification by spending some IETF time and resources for broader discussion, feedback and implementation experience.

The backstory

I work in Cloudflare's Protocols team which is responsible for terminating HTTP at the edge. We deal with things like TCP, TLS, QUIC, HTTP/1.x, HTTP/2 and HTTP/3 and since joining the company I've worked with Alessandro Ghedini, Junho Choi and Lohith Bellad to make QUIC and HTTP/3 generally available last September.

Working on emerging standards is fun. It involves an eclectic mix of engineering, meetings, document review, specification writing, time zones, personalities, and organizational boundaries. So while working on the codebase of quiche, our open source implementation of QUIC and HTTP/3, I am also mulling over design details of the protocols and discussing them in cross-industry venues like the IETF.

Because of HTTP/3's lineage, it carries over a lot of features from HTTP/2 including the priority signals and tree described earlier in the post.

One of the key benefits of HTTP/3 is that it is more resilient to the effect of lossy network conditions on performance; head-of-line blocking is limited because requests and responses can progress independently. This is, however, a double-edged sword because sometimes ordering is important. In HTTP/3 there is no guarantee that the requests are received in the same order that they were sent, so the priority tree can get out of sync between client and server. Imagine a client that makes two requests that include priority signals stating request 1 depends on root, request 2 depends on request 1. If request 2 arrives before request 1, the dependency cannot be resolved and becomes dangling. In such a case what is the best thing for a server to do? Ambiguity in behaviour leads to assumptions and disappointment. We should try to avoid that.

Request 1 depends on root and request 2 depends on request 1. If an HTTP/3 server receives request 2 first, the dependency cannot be resolved.

This is just one example where things get tricky quickly. Unfortunately the WG kept finding edge case upon edge case with the priority tree model. We tried to find solutions but each additional fix seemed to create further complexity to the HTTP/3 design. This is a problem because it makes it hard to implement a server that handles priority correctly.

In parallel to Cloudflare's work on implementing a better prioritization for HTTP/2, in January 2019 Pat posted his proposal for an alternative prioritization scheme for HTTP/3 in a message to the IETF HTTP WG.

Arguably HTTP/2 prioritization never lived up to its hype. However, replacing it with something else in HTTP/3 is a challenge because the QUIC WG charter required us to try and maintain parity between the protocols. Mark Nottingham, co-chair of the HTTP and QUIC WGs responded with a good summary of the situation. To quote part of that response:

My sense is that people know that we need to do something about prioritisation, but we're not yet confident about any particular solution. Experimentation with new schemes as HTTP/2 extensions would be very helpful, as it would give us some data to work with. If you'd like to propose such an extension, this is the right place to do it.

And so started a very interesting year of cross-industry discussion on the future of HTTP prioritization.

A year of prioritization

The following is an account of my personal experiences during 2019. It's been a busy year and there may be unintentional errors or omissions, please let me know if you think that is the case. But I hope it gives you a taste of the standardization process and a look behind the scenes of how new Internet protocols that benefit everyone come to life.

January

Pat's email came at the same time that I was attending the QUIC WG Tokyo interim meeting hosted at Akamai (thanks to Mike Bishop for arrangements). So I was able to speak to a few people face-to-face on the topic. There was a bit of mailing list chatter but it tailed off after a few days.

February to April

Things remained quiet in terms of prioritization discussion. I knew the next best opportunity to get the ball rolling would be the HTTP Workshop 2019 held in April. The workshop is a multi-day event not associated with a standards-defining-organization (even if many of the attendees also go to meetings such as the IETF or W3C). It is structured in a way that allows the agenda to be more fluid than a typical standards meeting and gives plenty of time for organic conversation. This sometimes helps overcome gnarly problems, such as the community finding a path forward for WebSockets over HTTP/2 due to a productive discussion during the 2017 workshop. HTTP prioritization is a gnarly problem, so I was inspired to pitch it as a talk idea. It was selected and you can find the full slide deck here.

During the presentation I recounted the history of HTTP prioritization. The great thing about working on open standards is that many email threads, presentation materials and meeting materials are publicly archived. It's fun digging through this history. Did you know that HTTP/2 is based on SPDY and inherited its weight-based prioritization scheme, and that the tree-based scheme we are familiar with today was only introduced in draft-ietf-httpbis-http2-11? One of the reasons for the more-complicated tree was to help HTTP intermediaries (a.k.a. proxies) implement clever resource management. However, it became clear during the discussion that no intermediaries implement this, and none seem to plan to. I also explained a bit more about Pat's alternative scheme and Nick described his implementation experiences. Despite some interesting discussion around the topic, however, we didn't come to any definitive solution. There were a lot of other interesting topics to discover that week.

May

In early May, Ian Swett (Google) restarted interest in Pat's mailing list thread. Unfortunately he was not present at the HTTP Workshop so had some catching up to do. A little while later Ian submitted a Pull Request to the HTTP/3 specification called "Strict Priorities". This incorporated Pat's proposal and attempted to fix a number of those prioritization edge cases that I mentioned earlier.

In late May, another QUIC WG interim meeting was held in London at the new Cloudflare offices. Credit to Alessandro for handling the meeting arrangements.

Mike, the editor of the HTTP/3 specification presented some of the issues with prioritization and we attempted to solve them with the conventional tree-based scheme. Ian, with contribution from Robin Marx (UHasselt), also presented an explanation about his "Strict Priorities" proposal. I recommend taking a look at Robin's priority tree visualisations which do a great job of explaining things. From that presentation I particularly liked "The prioritization spectrum", it's a concise snapshot of the state of things at that time:

An overview of HTTP/3 prioritization issues, fixes and possible alternatives. Presented by Ian Swett at the QUIC Interim Meeting May 2019.

June and July

Following the interim meeting, the prioritization "debate" continued electronically across GitHub and email. Some time in June Kazuho started work on a proposal that would use a scheme similar to Pat and Ian's absolute priorities. The major difference was that rather than send the priority signal in an HTTP frame, it would use a header field. This isn't a new concept, Roy Fielding proposed something similar at IETF 83.

In HTTP/2 and HTTP/3 requests are made up of frames that are sent on streams. Using a simple GET request as an example: a client sends a HEADERS frame that contains the scheme, method, path, and other request header fields. A server responds with a HEADERS frame that contains the status and response header fields, followed by DATA frame(s) that contain the payload.

To signal priority, a client could also send a PRIORITY frame. In the tree-based scheme the frame carries several fields that express dependencies and weights. Pat and Ian's proposals changed the contents of the PRIORITY frame. Kazuho's proposal encodes the priority as a header field that can be carried in the HEADERS frame as normal metadata, removing the need for the PRIORITY frame altogether.

I liked the simplification of Kazuho's approach and the new opportunities it might create for application developers. HTTP/2 and HTTP/3 implementations (in particular browsers) abstract away a lot of connection-level details such as stream or frames. That makes it hard to understand what is happening or to tune it.

The lingua franca of the Web is HTTP requests and responses, which are formed of header fields and payload data. In browsers, APIs such as Fetch and Service Worker allow handling of these primitives. In servers, there may be ways to interact with the primitives via configuration or programming languages. As part of Enhanced HTTP/2 Prioritization, we have exposed prioritization to Cloudflare Workers to allow rich behavioural customization. If a Worker adds the "cf-priority" header to a response, Cloudflare’s edge servers use the specified priority to serve the response. This might be used to boost the priority of a resource that is important to the load time of a page. To help inform this decision making, the incoming browser priority signal is encapsulated in the request object passed to a Worker's fetch event listener (request.cf.requestPriority).
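
A minimal sketch of the kind of Worker described here might look as follows. The route, the "30/0" value, and the exact shape of request.cf.requestPriority are placeholders and assumptions; this post does not spell out the value formats, so consult Cloudflare's documentation for the real ones.

// Boost the priority of a hypothetical render-critical stylesheet.
addEventListener("fetch", (event: any) => {
  event.respondWith(handle(event.request));
});

async function handle(request: any): Promise<Response> {
  // The incoming browser priority signal is exposed on the request object.
  console.log("browser priority signal:", request.cf && request.cf.requestPriority);

  const upstream = await fetch(request);
  const response = new Response(upstream.body, upstream); // make headers mutable

  if (new URL(request.url).pathname.endsWith("/critical.css")) {
    response.headers.set("cf-priority", "30/0"); // placeholder value
  }
  return response;
}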

Standardising approaches to problems is part of helping to build a better Internet. Because of the resonance between Cloudflare's work and Kazuho's proposal, I asked if he would consider letting me come aboard as a co-author. He kindly accepted and on July 8th we published the first version as an Internet-Draft.

Meanwhile, Ian was helping to drive the overall prioritization discussion and proposed that we use time during IETF 105 in Montreal to speak to a wider group of people. We kicked off the week with a short presentation to the HTTP WG from Ian, and Kazuho and I presented our draft in a side-meeting that saw a healthy discussion. There was a realization that the concepts of prioritization scheme, priority signalling and server resource scheduling (enacting prioritization) were conflated and made effective communication and progress difficult. HTTP/2's model was seen as one aspect, and two different I-Ds were created to deprecate it in some way (draft-lassey-priority-setting, draft-peon-httpbis-h2-priority-one-less). Martin Thomson (Mozilla) also created a Pull Request that simply removed the PRIORITY frame from HTTP/3.

To round off the week, in the second HTTP session it was decided that there was sufficient interest in resolving the prioritization debate via the creation of a design team. I joined the team led by Ian Swett along with others from Adobe, Akamai, Apple, Cloudflare, Fastly, Facebook, Google, Microsoft, and UHasselt.

August to October

Martin's PR generated a lot of conversation. It was merged under proviso that some solution be found before the HTTP/3 specification was finalized. Between May and August we went from something very complicated (e.g. Orphan placeholder, with PRIORITY only on control stream, plus exclusive priorities) to a blank canvas. The pressure was now on!

The design team held several teleconference meetings across the months. Logistics are a bit difficult when you have team members distributed across West Coast America, East Coast America, Western Europe, Central Europe, and Japan. However, thanks to some late nights and early mornings we managed to all get on the call at the same time.

In October most of us travelled to Cupertino, CA to attend another QUIC interim meeting hosted at Apple's Infinite Loop (Eric Kinnear helping with arrangements).  The first two days of the meeting were used for interop testing and were loosely structured, so the design team took the opportunity to hold the first face-to-face meeting. We made some progress and helped Ian to form up some new slides to present later in the week. Again, there was some useful discussion and signs that we should put some time in the agenda in IETF 106.

November

The design team came to agreement that draft-kazuho-httpbis-priority was a good basis for a new prioritization scheme. We decided to consolidate the various I-Ds that had sprung up during IETF 105 into the document, making it a single source that was easier for people to track progress and open issues if required. This is why, even though Kazuho and I are the named authors, the document reflects a broad input from the community. We published draft 03 in November, just ahead of the deadline for IETF 106 in Singapore.

Many of us travelled to Singapore ahead of the actual start of IETF 106. This wasn't to squeeze in some sightseeing (sadly) but rather to attend the IETF Hackathon. These are events where engineers and researchers can really put the concept of "running code" to the test. I really enjoy attending and I'm grateful to Charles Eckel and the team that organised it. If you'd like to read more about the event, Charles wrote up a nice blog post that, through some strange coincidence, features a picture of me, Kazuho and Robin talking at the QUIC table.

The design team held another face-to-face during a Hackathon lunch break and decided that we wanted to make some tweaks to the design written up in draft 03. Unfortunately the freeze was still in effect so we could not issue a new draft. Instead, we presented the most recent thinking to the HTTP session on Monday where Ian put forward draft-kazuho-httpbis-priority as the group's proposed design solution. Ian and Robin also shared results of prioritization experiments. We received some great feedback in the meeting and during the week pulled out all the stops to issue a new draft 04 before the next HTTP session on Thursday. The question now was: Did the WG think this was suitable to adopt as the basis of an alternative prioritization scheme? I think we addressed a lot of the feedback in this draft and there was a general feeling of support in the room. However, in the IETF consensus is declared via mailing lists and so Tommy Pauly, co-chair of the HTTP WG, put out a Call for Adoption on November 21st.

December

In the Cloudflare London office, preparations begin for mince pie acquisition and assessment.

The HTTP priorities team played the waiting game and watched the mailing list discussion. On the whole people supported the concept but there was one topic that divided opinion. Some people loved the use of headers to express priorities, some people didn't and wanted to stick to frames.

On December 13th Tommy announced that the group had decided to adopt our document and assign Kazuho and me as editors. The header/frame divide was noted as something that needed to be resolved.

The next step of the journey

Just because the document has been adopted does not mean we are done. In some ways we are just getting started. Perfection is often the enemy of getting things done and so sometimes adoption occurs at the first incarnation of a "good enough" proposal.

Today HTTP/3 has no prioritization signal. Without priority information there is a small danger that servers pick a scheduling strategy that is not optimal, which could cause the web performance of HTTP/3 to be worse than that of HTTP/2. To avoid that happening we'll refine and complete the design of the Extensible Prioritization Scheme. To do so there are open issues that we have to resolve, we'll need to square the circle on headers vs. frames, and we'll no doubt hit unknown unknowns. We'll need the input of the WG to make progress and their help to document the design that fits the need, and so I look forward to continued collaboration across the Internet community.

2019 was quite a ride and I'm excited to see what 2020 brings.

If working on protocols is your interest and you like what Cloudflare is doing, please visit our careers page. Our journey isn’t finished, in fact far from it.

09:18

Saturday Morning Breakfast Cereal - Liar [Saturday Morning Breakfast Cereal]

Hovertext:
Why must you mingle lies and cookies in your mouth!

03:14

A new decade for Sailfish OS [Jolla Blog]

As the year 2020 and a new decade are just around the corner, I’d like to thank all our partners, customers, community members, and fellow sailors across the world for being part of the world-changing Jolla Sailfish story for another year. Not only was 2019 a good year, but the entire decade has been a wild ride for us together. Sincere thanks for sailing it with us!

Our dear Sailfish OS and our company Jolla are steadily approaching the age of 10 years. Most of you know the history. From the start we had a bold vision of offering the world a transparent, trusted and privacy-preserving independent alternative for the most personal tech device we use to manage our daily lives – the smartphone. This is the vision we’ve carried through all stages of the story, from developing and offering Jolla-branded devices in the early days, to the licensing business we’ve been pushing for the past few years.

We are mobile and tech enthusiasts who want to build and develop a mobile operating system we want to use ourselves, and to perfect Sailfish OS for our licensing customers. In parallel we’ve created the Sailfish X program to carry on the Jolla device heritage for all you like-minded people who want to be independent from the big players, who cherish privacy and data integrity, and who simply just enjoy being boldly different!

 

Strong partnerships and rigorous deep development

The Sailfish OS product is in great shape, and during this year we have shipped altogether six (6!) new Sailfish 3 software releases and six SDK releases. These releases have included enhancements and major features for security, overall user experience, application handling, and a redesign of some core Sailfish apps. During 2019 we also released a totally rearchitected version of our unique Android-apps-on-native-Linux app support, now including support for Android 8.1 level apps for compatible Sailfish devices. We also launched support for the Sony Xperia 10, our first device to come with user data encryption enabled by default.

Implementing user data encryption has also been a major effort including changes to many different areas of the device. System startup, alarms, emergency calls, system upgrades, and many other areas were impacted by these changes, which required over 12 months of work.

During 2019 our partnership with Open Mobile Platform (OMP) in Russia advanced significantly, and the project has taken great steps forward. Some of the news is available on OMP’s web page.

 

What to expect in 2020?

Our expectations for the year include new licensing customers and a rapidly growing number of new Sailfish OS users, both through our licensing customers and through the Sailfish X community program. We will continue to develop the product to match and exceed the needs of our customers, and to enhance the day-to-day experience for all Sailfish users.

Our 2020 Roadmap includes improving the security architecture for the UI, privacy-preserving multi-user support, and new enablers for cloud-based services, to name a few. In addition, our product roadmap includes several important updates for Sailfish OS developers. We will be updating our compiler toolchain, improving the SDK, and offering more APIs with documentation to better serve our developers in creating native Sailfish OS apps. We’re also working to enhance communication between all Sailfish OS developers by introducing updates to our online discussion tools.

While development continues at an increased pace, we are also working on new licensing partnerships in new countries. These cases take a lot of time to push through, and we will be working hard to bring them up and live during the first half of 2020.

We are now at a great starting point for another Sailfish decade to come. I want to use this opportunity to thank all Sailfish partners and fans for the past decade and wish the best of success to us all. Happy New Year and decade everyone!

Your captain sincerely,
Sami

The post A new decade for Sailfish OS appeared first on Jolla Blog.

Monday, 30 December

09:51

Saturday Morning Breakfast Cereal - App [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you make this, you owe me a hug.


Today's News:

Just 2 weeks left to enter your proposal for BAHFest! This year we have a little budget to fly in non-local people, so submit even if you're non-local! 

01:00

Top articles of 2019: Editors’ choice [Fedora Magazine]

The year is still winding down, so the perfect time to reflect and look back at some Magazine articles continues. This time, let’s see whether the editors chose some interesting ones from 2019. Yes, they did!

Red Hat, IBM, and Fedora

IBM acquired Red Hat in July 2019, and this article discusses how nothing changes for the Fedora project.

Some tips for the Workstation users

Using Fedora Workstation? This article gives you some tips including enhancing photos, coding, or getting more wallpapers right from the repositories.

Fedora and CentOS Stream

In this article, the Fedora Project Leader discusses the CentOS Stream announcement from September 2019 — including the relationship of Fedora, Red Hat Enterprise Linux, and CentOS.

Contribute to Fedora Magazine

Fedora Magazine exists thanks to our great contributors. And you (yes, you!) can become one, too! Contributions include topic proposals, writing, and editorial tasks. This article shows you how to join the team and help people learn about Linux.

Sunday, 29 December

07:40

Saturday Morning Breakfast Cereal - In Love [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The next time someone tells you they want to remember a moment forever, just shout 'dolphin dongs!' as loud as you can.


Today's News:

Saturday, 28 December

09:27

Saturday Morning Breakfast Cereal - Hades [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you'd like the experience of seeing wonderful stuff you desire but just always a bit out of reach, please enjoy browsing Pinterest.


Today's News:

Friday, 27 December

12:30

Happy Holidays! [The Cloudflare Blog]


I joined Cloudflare in July of 2019, but I've known of Cloudflare for years. I always read the blog posts and looked at the way the company was engaging with the community. I also noticed the diversity in the names of many of the blog post authors.

There are over 50 languages spoken at Cloudflare, as we have natives of many countries on our team, with different backgrounds, religions, genders and cultures. And it is this diversity that makes us a great team.

A few days ago I asked one of my colleagues how he would say "Happy Holidays!" in Arabic. When I heard him say it, I instantly got the idea of recording a video in as many languages as possible of our colleagues wishing all of you, our readers and customers, a happy winter season.

It only took one internal message for people to start responding and sending their videos to me. Some did it themselves; others gathered in a meeting room and helped each other record their greeting. It took a few days and some video editing to put together an informal video, made entirely by the team, to wish you all the best as we close this year and decade.

So here it is: Happy Holidays from all of us at Cloudflare!

Let us know if you speak any of the languages in the video. Or maybe you can tell us how you greet each other, at this time of the year, in your native language.

01:00

Top articles of 2019: For desktop users [Fedora Magazine]

It’s this time of the year again — the time to reflect and look back at some of Fedora Magazine’s most popular articles in 2019. This time it’s all about desktop users. Let’s highlight a few of the many articles written by our great contributors in 2019, focusing on Fedora as a desktop OS.

Dash to Dock extension for Workstation

When you’re serious about your desktop, and perhaps using many applications, you might want to see what’s going on at all times. Or at least the icons. The article below shows you how to have a dock at the bottom of your screen, with all your apps — both running and favourites — visible at all times.

Tweaking the look of Workstation with themes

When you like how your Linux desktop works, but not so much how it looks, there is a solution. The following article shows you how to tweak the look of your windows, icons, the mouse cursor, and the whole environment as well — all that within GNOME, the Workstation’s default environment.

i3 with multiple monitors

One of the great things about the Linux desktop is the never-ending possibilities for customisation. And that includes window managers, too! The following article shows how to use one of the very popular ones — i3 — with multiple monitors.

IceWM

If you’re looking for speed, simplicity, and getting out of the user’s way, you might like IceWM. The following article introduces this minimal window manager, and helps you install it, too, should you be interested.

Stay tuned for even more upcoming “Best of 2019” articles. All of us at the Magazine hope you have a relaxing holiday season, and wish you a happy new year.

Thursday, 26 December

Wednesday, 25 December

01:00

Best of 2019: Fedora for developers [Fedora Magazine]

With the end of the year approaching fast, it is a good time to look back at 2019 and go through the most popular articles on Fedora Magazine written by our contributors.

In this article of the “Best of 2019” series, we look at developers and how Fedora can be a great developer workstation.

Make your Python code look good with Black on Fedora

Black made quite a big impact in the Python ecosystem this year. The project is now part of the Python Software Foundation and is used by many different projects. So if you write or maintain some Python code and want to stop having to care about code style and formatting, you should check out this article.
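
As a quick illustration of the idea (not taken from the article itself), Black can be driven from Python as well as from the command line. The sketch below assumes the black package is installed; the unformatted input string is invented for the example.

    # Hedged sketch: reformat a small piece of Python source with Black's API.
    import black

    messy = "def add( a,b ):\n    return a+ b\n"
    tidy = black.format_str(messy, mode=black.FileMode())
    print(tidy)
    # Black normalizes spacing and layout, producing:
    # def add(a, b):
    #     return a + b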

How to run virtual machines with virt-manager

Setting up a development environment, running integration tests, testing a new feature, or running an older version of software: for all of these use cases, being able to create and run a virtual machine is must-have knowledge for a developer. This article walks you through how you can achieve that using virt-manager on your Fedora workstation.
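
As a side note, virt-manager is a desktop front end to libvirt, and the same libvirt layer can also be scripted. The sketch below is illustrative rather than from the article: it assumes the libvirt Python bindings are installed and that a local qemu:///system hypervisor is reachable.

    # Hedged sketch: list the virtual machines that virt-manager would show,
    # using the libvirt Python bindings.
    import libvirt

    conn = libvirt.open("qemu:///system")  # may require suitable privileges
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
    conn.close()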

Jupyter and data science in Fedora

With the rise of data science and machine learning, the Jupyter IDE has become a very popular choice for sharing or presenting a program and its results. This article goes into the details of installing and using Jupyter, and the different libraries and tools useful for data science.
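
For a flavour of what that looks like in practice, here is the kind of cell you might run in a notebook once Jupyter is installed; the data and plot are invented purely for illustration, and the article itself covers installation and the wider tooling.

    # Hedged sketch: a typical notebook-style cell using numpy and matplotlib.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 200)
    plt.plot(x, np.sin(x), label="sin(x)")
    plt.legend()
    plt.title("A first notebook plot")
    plt.show()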

Building Smaller Container Images

Fedora provides different container images, one of which is a minimal base image. The following article demonstrates how one can use this image to build smaller container images.

Getting Started with Go on Fedora

In 2019 the Go programming language turned 10 years old. In those ten years the language has managed to become a default choice for cloud-native applications and the cloud ecosystem. Fedora provides an easy way to start developing in Go; this article takes you through the first steps needed to get started.

Stay tuned to the Magazine for other upcoming “Best of 2019” categories. All of us at the Magazine hope you have a great end of year and holiday season.

Tuesday, 24 December

11:04

This holiday's biggest online shopping day was... Black Friday [The Cloudflare Blog]


What’s the biggest day of the season for holiday shopping? Black Friday, the day after US Thanksgiving, has been embraced globally as the day retail stores announce their sales. But it was believed that the following Monday, dubbed “Cyber Monday,” might be even bigger. Or, with the explosion of reliable two-day and even one-day shipping, maybe another day closer to Christmas has taken the crown. At Cloudflare, we set out to answer this question for the 2019 holiday shopping season.

Black Friday was the biggest online shopping day but the second biggest wasn't Cyber Monday... it was Thanksgiving Day itself (the day before Black Friday!). Cyber Monday was the fourth biggest day.

Here's a look at checkout events seen across Cloudflare's network since before Thanksgiving in the US.

Checkout events as a percentage of checkouts on Black Friday

The weekends are shown in yellow and Black Friday and Cyber Monday are shown in green. You can see that checkouts ramped up during Thanksgiving week and then continued through the weekend into Cyber Monday.

Black Friday had twice as many checkouts as the preceding Friday, and the entire Thanksgiving week dominates. Post-Cyber Monday, no day reached 50% of the number of checkouts we saw on Black Friday. And Cyber Monday itself was just 60% of Black Friday.
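
The chart above normalizes each day against the Black Friday peak. Here is a tiny sketch of that calculation, with invented checkout counts whose ratios loosely follow the figures quoted in this post.

    # Hedged sketch: express daily checkout counts as a percentage of the
    # Black Friday peak. The absolute numbers are made up for illustration.
    checkouts = {
        "Preceding Friday": 50_000,
        "Thanksgiving": 70_000,
        "Black Friday": 100_000,
        "Cyber Monday": 60_000,
    }
    peak = checkouts["Black Friday"]
    for day, count in checkouts.items():
        print(f"{day}: {count / peak:.0%} of Black Friday")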

So, Black Friday is the peak day but Thanksgiving Day is the runner up. Perhaps it deserves its own moniker: Thrifty Thursday anyone?

Checkouts occur more frequently from Monday to Friday and then drop off over the weekend. After Cyber Monday only one other day showed an interesting peak. Looking at last week, it does appear that Tuesday, December 17 was the pre-Christmas peak for online checkouts. Perhaps fast shipping made consumers feel they could keep shopping online as long as they got their purchases by the weekend before Christmas.

Happy Holidays from everyone at Cloudflare!

Monday, 23 December

01:00

Best of 2019: Fedora for system administrators [Fedora Magazine]

The end of the year is a perfect time to look back on some of the Magazine’s most popular articles of 2019. One of the Fedora operating system’s many strong points is its wide array of tools for system administrators. As your skills progress, you’ll find that the Fedora OS has even more to offer. And because Linux is the sysadmin’s best friend, you’ll always be in good company. In 2019, there were quite a few articles about sysadmin tools our readers enjoyed. Here’s a sampling.

Introducing Fedora CoreOS

If you follow modern IT topics, you know that containers are a hot topic — and containers mean Linux. This summer brought the first preview release of Fedora CoreOS. This new edition of Fedora can run containerized workloads. You can use it to deploy apps and services in a modern way.

InitRAMFS, dracut and the dracut emergency shell

To be a good sysadmin, you need to understand system startup and the boot process. From time to time, you’ll encounter software errors, configuration problems, or other issues that keep your system from starting normally. With the information in the article below, you can do some life-saving surgery on your system, and restore it to working order.

How to reset your root password

Although this article was published a few years ago, it continues to be one of the most popular. Apparently, we’re not the only people who sometimes get locked out of our own system! If this happens to you, and you need to reset the root password, the article below should do the trick.

Systemd: unit dependencies and order

This article is part of an entire series on systemd, the modern system and process manager in Fedora and other distributions. As you may know, systemd has sophisticated but easy-to-use methods to start up or shut down services in the right order. This article shows you how they work, so that you can apply the right options to the unit files you create for systemd.

Setting kernel command line arguments

Fedora 30 introduced new ways to change the boot options for your kernel. This article from Laura Abbott on the Fedora kernel team explains the new Bootloader Spec (BLS). It also tells you how to use it to set options on your kernel for boot time.

Stay tuned to the Magazine for other upcoming “Best of 2019” categories. All of us at the Magazine hope you have a great end of year and holiday season.

Sunday, 22 December

Friday, 20 December

14:49

First Half 2019 Transparency Report and an Update on a Warrant Canary [The Cloudflare Blog]


Today, we are releasing Cloudflare’s transparency report for the first half of 2019. We recognize the importance of keeping the reports current, but it’s taken us a little longer than usual to put this one together. We have a few notable updates.


Pulling a warrant canary

Since we issued our very first transparency report in 2014, we’ve maintained a number of commitments - known as warrant canaries - about what actions we will take and how we will respond to certain types of law enforcement requests. We supplemented those initial commitments earlier this year, so that our current warrant canaries state that Cloudflare has never:

  1. Turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.
  2. Installed any law enforcement software or equipment anywhere on our network.
  3. Terminated a customer or taken down content due to political pressure*
  4. Provided any law enforcement organization a feed of our customers' content transiting our network.
  5. Modified customer content at the request of law enforcement or another third party.
  6. Modified the intended destination of DNS responses at the request of law enforcement or another third party.
  7. Weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

These commitments serve as a statement of values to remind us what is important to us as a company, and to convey not only what we do, but what we believe we should do. For us to maintain these commitments, we have to believe not only that we’ve met them in the past, but that we can continue to meet them.

Unfortunately, there is one warrant canary that no longer meets the test for remaining on our website. After Cloudflare terminated the Daily Stormer’s service in 2017, Matthew observed:

"We're going to have a long debate internally about whether we need to remove the bullet about not terminating a customer due to political pressure. It's powerful to be able to say you've never done something. And, after today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don't like."

We addressed this issue in our subsequent transparency reports by retaining the statement, but adding an asterisk identifying the Daily Stormer debate and the criticism that we had received in the wake of our decision to terminate services. Our goal was to signal that we remained committed to the principle that we should not terminate a customer due to political pressure, while not ignoring the termination. We also sought to be public about the termination and our reasons for the decision, ensuring that it would not go unnoticed.

Although that termination sparked significant debate about whether infrastructure companies should be making decisions about what content remains online, we haven’t yet seen politically accountable actors put forth real alternatives to address deeply troubling content and behavior online. Since that time, we’ve seen even more real-world consequences from the vitriol and hateful content spread online, from the screeds posted in connection with the terror attacks in Christchurch, Poway and El Paso to the posting of video glorifying those attacks. Indeed, in the absence of true public policy initiatives to address those concerns, the pressure on tech companies -- even deep Internet infrastructure companies like Cloudflare -- to make judgments about what stays online has only increased.

In August 2019, Cloudflare terminated service to 8chan based on their failure to moderate their hate-filled platform in a way that inspired murderous acts. Although we don’t think removing cybersecurity services to force a site offline is the right public policy approach to the hate festering online, a site’s failure to take responsibility to prevent or mitigate the harm caused by its platform leaves service providers like us with few choices. We’ve come to recognize that the prolonged and persistent lawlessness of others might require action by those further down the technical stack. Although we’d prefer that governments recognize that need, and build mechanisms for due process, if they fail to act, infrastructure companies may be required to take action to prevent harm.

And that brings us back to our warrant canary. If we believe we might have an obligation to terminate customers, even in a limited number of cases, retaining a commitment that we will never terminate a customer “due to political pressure” is untenable. We could, in theory, argue that terminating a lawless customer like 8chan was not a termination “due to political pressure.” But that seems wrong. We shouldn’t be parsing specific words of our commitments to explain to people why we don’t believe we’ve violated the standard.

We remain committed to the principle that providing cybersecurity services to everyone, regardless of content, makes the Internet a better place. Although we’re removing the warrant canary from our website, we believe that to earn and maintain our users’ trust, we must be transparent about the actions we take. We therefore commit to reporting on any action that we take to terminate a user that could be viewed as a termination “due to political pressure.”

UK/US Cloud agreement

As we’ve described previously, governments have been working to find ways to improve law enforcement access to digital evidence across borders. Those efforts resulted in a new U.S. law, the Clarifying Lawful Overseas Use of Data (CLOUD) Act, premised on the idea that law enforcement around the world should be able to get access to electronic content related to their citizens when conducting law enforcement investigations, wherever that data is stored, as long as they are bound by sufficient procedural safeguards to ensure due process.

On October 3, 2019, the US and UK signed the first Executive Agreement under this law. According to the requirements of U.S. law, that Agreement will go into effect in 180 days, in March 2020, unless Congress takes action to block it. There is an ongoing debate as to whether the agreement includes sufficient due process and privacy protections. We’re going to take a wait and see approach, and will closely monitor any requests we receive after the agreement goes into effect.

For the time being, Cloudflare intends to comply with appropriately scoped and targeted requests for data from UK law enforcement, provided that those requests are consistent with the law and international human rights standards. Information about the legal requests that Cloudflare receives from non-U.S. governments pursuant to the CLOUD Act will be included in future transparency reports.