Saturday, 23 February

16:45

Samsung's Newest Phones Read Your Fingerprints With Ultrasonic Sound Waves [Slashdot]

An anonymous reader quotes CNET: The Galaxy S10's in-screen fingerprint scanner may look just like the one on the OnePlus 6T, but don't be fooled. Samsung's flagship Galaxy S10 and S10 Plus are the first phones to use Qualcomm's ultrasonic in-screen fingerprint technology, which uses sound waves to read your print. Related to ultrasound in a doctor's office, this "3D Sonic Sensor" technology works by bouncing sound waves off your skin. It'll capture your details through water, lotion and grease, at night or in bright daylight. Qualcomm also claims it's faster and much more secure than the optical fingerprint sensor you've seen in other phones before this. That's because the ultrasonic reader takes a 3D capture of all the ridges and valleys that make up your skin, compared to a 2D image -- basically a photo -- that an optical reader captures using light, not sound waves.

Read more of this story at Slashdot.

15:50

Amazon Prime Air Cargo Plane Crashes in Texas, Three Dead [Slashdot]

An anonymous reader quotes Weather.com: An Amazon Prime Air cargo plane crashed Saturday afternoon into Trinity Bay near Anahuac, Texas, as it approached Houston's George Bush Intercontinental Airport. Three crew members aboard the plane did not survive the crash, the Chambers County sheriff told WJTV. Air traffic controllers lost radar and radio contact with Atlas Air Flight 3591 shortly before 12:45 p.m. CST. The 767 jetliner was arriving from Miami when the crash occurred 30 miles southeast of the airport, according to a statement by the Federal Aviation Administration.

Read more of this story at Slashdot.

15:32

Coroutines & Modules Added For C++20 [Phoronix]

The ISO C++ committee has wrapped up its winter meeting in Hawaii that also served as the last meeting for approving new features for the upcoming C++20 revision to the C++ programming language...

14:55

Work-In-Progress "DXVK-Native" Allows For Better Wine/System Integration [Phoronix]

There's work-in-progress patches for DXVK and Wine to improve the integration between the two for this Direct3D-on-Vulkan library...

14:39

New Material Can Soak Up Uranium From Seawater [Slashdot]

A new adsorbent material "soaks up uranium from seawater, leaving interfering ions behind," reports the ACS's Chemical & Engineering News, in an article shared by webofslime: The world's oceans contain some 4 billion metric tons of dissolved uranium. That's roughly 1,000 times as much as all known terrestrial sources combined, and enough to fuel the global nuclear power industry for centuries. But the oceans are so vast, and uranium's concentration in seawater is so low -- roughly 3 ppb -- that extracting it remains a formidable challenge... Researchers have been looking for ways to extract uranium from seawater for more than 50 years... Nearly 20 years ago, the Japan Atomic Energy Agency (JAEA) confirmed that amidoxime-functionalized polymers could soak up uranium reliably even under harsh marine conditions. But that type of adsorbent has not been implemented on a large scale because it has a higher affinity for vanadium than uranium. Separating the two ions raises production costs. Alexander S. Ivanov of Oak Ridge National Laboratory, together with colleagues there and at Lawrence Berkeley National Laboratory and other institutions, may have come up with a solution. Using computational methods, the team identified a highly selective triazine chelator known as H2BHT that resembles iron-sequestering compounds found in bacteria and fungi.... H2BHT exhibits little attraction for vanadium but has roughly the same affinity for uranyl ions as amidoxime-based adsorbents do.

Read more of this story at Slashdot.

13:44

Record-Breaking Jet Stream Accelerates Air Travel, Flight Clocks In At 801 MPH [Slashdot]

pgmrdlm quotes CBS News: On Monday night, the river of air 35,000 feet above the New York City area, known as the jet stream, clocked in at a blazing 231 mph. This is the fastest jet stream on record since 1957 for the National Weather Service in Upton, New York — breaking the old record of 223 mph, according to NWS forecaster Carlie Buccola. This wind provided a turbo boost to commercial passenger planes along for the ride. With the help of this rapid tailwind, Virgin Atlantic Flight 8 from Los Angeles to London hit what could be a record high speed for a Boeing 787: 801 mph over Pennsylvania at 9:20 p.m. Monday night... "The typical cruising speed of the Dreamliner is 561 mph," CBS News transportation correspondent Kris Van Cleave points out. "The past record for the 787 is 776 mph set in January 2017 by a Norwegian 787-9 flying from JFK to London Gatwick. That flight set a record for the fastest subsonic transatlantic commercial airline flight -- 5 hours and 13 minutes, thanks to a 202 mph tailwind." FlightAware, a global aviation data services company, reminds CBS that even a 100 mph increase in the jet stream can shorten a flight by an hour.

Read more of this story at Slashdot.

12:51

What Happens When Police License Plate Readers Make Mistakes? [Slashdot]

An anonymous reader writes: The Verge reports that San Francisco Bay Area police "pulled over a California privacy advocate and held him at gunpoint after a database error caused a license plate reader to flag a car as stolen, a lawsuit alleges." Brian Hofer, the chairman of Oakland's Privacy Advisory Commission, was handcuffed and surrounded by multiple police cars, and says a police deputy injured his brother by throwing him to the ground. They were finally released -- 40 minutes later. But ironically, Hofer has been a staunch critic of license plate readers, "which he points out have led to wrongful detentions, invasions of privacy and potentially costly lawsuits." (California bus driver Denise Green was detained at gunpoint when her own car was incorrectly identified as stolen -- leading to a lawsuit which she eventually settled for nearly $500,000.) And at least one thief simply swapped license plates with an innocent driver. The executive director of Northern California Regional Intelligence Center, a state government program, acknowledged that the accuracy rate of the license plate readers is about 90 percent, yet "added that in some cases, the technology has actually exonerated people, or given potential suspects alibis. But there is no way for the public to know just how effective the license plate reader technology is in capturing criminals" -- apparently because police departments aren't capturing that data. Only one of the region's police departments, in Piedmont, California, reported its "efficacy metrics" to the agency -- with 7,500 "hits" which over 11 months led to 28 arrests (and the recovery of 39 cars) after reading 21.3 million license plates. The license plate readers cost $20,000 per patrol car. In Hofer's case, he was driving a rental car which had previously been reported as stolen but then later recovered -- though for some reason the police or rental car agency failed to update their database. 
But he criticizes the fact that "somebody could pull a gun on you because of an alert that a computer system gave them." "They're just pulling guns and going cowboy on us," Hofer says. "It's a pretty terrifying position to be in... This is happening more frequently than it should be. They're not ensuring the accuracy of their data and people's lives are literally at risk."

Read more of this story at Slashdot.

12:25

Redox OS Exploring Coreboot Payload, Other Improvements [Phoronix]

It's been a while since last having anything significant to report on Redox OS, the Unix-like operating system written in the Rust programming language and pursuing a micro-kernel design, but fortunately this open-source OS is still moving along and they have some interesting plans moving forward...

11:34

Virgin Galactic Reaches Space Again In Highest, Fastest Test Flight Yet [Slashdot]

"If you're willing to spend $250,000 for a quick trip to space, that option is getting closer to reality," reports CNN. VSS Unity, Virgin Galactic's rocket-powered plane, climbed to a record altitude of nearly 56 miles during a test flight on Friday, marking the second time Richard Branson's startup has reached space. Two pilots, and for the first time, an additional crew member, were on board. Beth Moses, Galactic's chief astronaut trainer and an aerospace engineer, rode along with the pilots. The trip allowed her to run safety checks and get a first look at what Galactic's customers could one day experience. Moses has logged hundreds of hours on zero-gravity aircraft, and she described the G-forces aboard the supersonic plane as "mildly wild." Some moments were intense, she told CNN Business, but it was never uncomfortable. "I was riveted and I think our customers will be as well." Unity took off from a runway in California's Mojave Desert just after 8 am PT and cruised to about 45,000 feet attached to its mothership before it broke away and fired its rocket motor. The plane then swooped into the upper reaches of the atmosphere, 295,000 feet high, at supersonic speeds. Its top speed was Mach 3. At the peak of its flight path, Unity experienced a few minutes of weightlessness and looked out into the black skies of the cosmos. Moses said she was able to leave her seat and take in the view. "The Earth was beautiful -- super sharp, super clear," she said, "with a gorgeous view of the Pacific mountains." America's Federal Aviation Administration says it will now award commercial astronaut wings to all three members of the crew, and CNN reports that this second successful test flight suggests Galactic "could be on track" to start flying tourists into space this year. "About 600 people have reserved tickets, priced between $200,000 and $250,000, to fly with Galactic. And the company says it wants to eventually lower prices to broaden its customer base."

Read more of this story at Slashdot.

10:34

Microsoft's Cloud Evangelist Adds 'Clippy' To Their Business Card [Slashdot]

An anonymous reader quotes Business Insider's update on Microsoft Clippy, the animated cartoon paperclip that was Office's virtual assistant until the early 2000s, that "everyone loved to hate." After 18 years, has it become retro chic? When Chloe Condon, a newly hired Microsoft cloud evangelist, ordered new business cards, she avoided the standard corporate look and instead went with Clippy-themed cards and tweeted them out... They've got a picture of Clippy on the front and on the back they say, "It looks like you are trying to get in touch with Chloe," with her contact info listed below... Naturally, the Clippy The Paperclip Twitter account loved these cards. He tweeted, "@chloecondon It looks like you're using my likeness on your new business cards. Would you like help with WAIT I'M ON BUSINESS CARDS NOW?!" And then former Microsoft exec Steven Sinofsky, the man credited with developing Microsoft Office into a massive hit, noticed the cards and tweeted, "I suppose if you live long enough, others will wear your failures as a badge of honor...." After four years of scorn, Clippy was officially retired in 2001. Sinofsky tells Business Insider that the company even issued a funny press release about it.... Microsoft even held an official retirement party for him in San Francisco, too. Sinofsky shared a photo from that party with us... If you look closely, you'll see unemployed Clippy is actually using the party thrown in his honor to collect charity for himself and beg for food.

Read more of this story at Slashdot.

09:34

12-Year-Old Boy Reportedly Builds A Nuclear Fusion Reactor [Slashdot]

An anonymous reader quotes the Guardian: An American 14-year-old has reportedly become the youngest known person in the world to create a successful nuclear reaction. The Open Source Fusor Research Consortium, a hobbyist group, has recognised the achievement by Jackson Oswalt, from Memphis, Tennessee, when he was aged 12 in January 2018.... The enterprising teenager said he transformed an old playroom in his parents' house into a nuclear laboratory with $10,000 (£7,700) worth of equipment that uses 50,000 volts of electricity to heat deuterium gas and fuse the nuclei to release energy. "The start of the process was just learning about what other people had done with their fusion reactors," Jackson told Fox. "After that, I assembled a list of parts I needed. I got those parts off eBay primarily and then oftentimes the parts that I managed to scrounge off of eBay weren't exactly what I needed. So I'd have to modify them to be able to do what I needed to do for my project...." [S]cientists are likely to remain sceptical until Oswalt's workings are subject to verification from an official organisation and are published in an academic journal. Still, the teenager may now have usurped the previous record holder, Taylor Wilson, who works in nuclear energy research after achieving fusion aged 14.

Read more of this story at Slashdot.

09:04

Don't Look For Gentoo's CPU Optimization Options To Land In The Mainline Linux Kernel [Phoronix]

Gentoo's Linux kernel build has long offered various CPU options in allowing those building their distribution to optimize their kernel build to the CPU being used. Every so often the patch is suggested for upstreaming to the mainline Linux kernel before being quickly rejected by the upstream maintainers...

08:34

Redis Changes Its Open Source License -- Again [Slashdot]

"Redis Labs is dropping its Commons Clause license in favor of its new 'available-source' license: Redis Source Available License (RSAL)," reports ZDNet -- adding "This is not an open-source license." Redis Labs had used Commons Clause on top of the open-source Apache License to protect its rights to modules added to its 3-Clause-BSD-licensed Redis, the popular open-source in-memory data structure store. But, as Manish Gupta, Redis Labs' CMO, explained, "It didn't work. Confusion reigned over whether or not the modules were open source. They're not open-source." So, although it hadn't wanted to create a new license, that's what Redis Labs ended up doing.... The RSAL grants, Gupta said, equivalent rights to permissive open-source licenses for the vast majority of users. With the RSAL, developers can: Use the software; modify the source code; integrate it with an application; and use, distribute, support, or sell their application. But -- and this is big -- the RSAL forbids you from using any application built with these modules in a database, a caching engine, a stream processing engine, a search engine, an indexing engine, or a machine learning/artificial intelligence serving engine. In short, all the ways that Redis Labs makes money from Redis. Gupta wants to make it perfectly clear: "We're not calling it open source. It's not." Earlier this month the Open Source Initiative had reaffirmed its commitment to open source's original definition, adding "There is no trust in a world where anyone can invent their own definition for open source, and without trust there is no community, no collaboration, and no innovation." And earlier this week on Twitter a Red Hat open-source evangelist said they wondered whether Redis was just "clueless. There are a lot of folks entering #opensource today who are unwilling to do the research and reading, and assume that these are all new problems."

Read more of this story at Slashdot.

07:55

Saturday Morning Breakfast Cereal - Behavior [Saturday Morning Breakfast Cereal]

Hovertext:
Now that I'm a parent, I'm in favor of giving genes ALL the responsibility.

07:19

OpenSUSE Leap 15.1 Beta Is Running Well - Benchmarks On AMD EPYC Workstation [Phoronix]

With openSUSE Leap 15.1 reaching beta this week I decided to take it for a quick spin of this Linux distribution derived from the same sources as SUSE Linux Enterprise 15 SP1. Here are some quick benchmarks compared to Leap 15.0 as well as the latest rolling-release openSUSE Tumbleweed.

06:27

Habana Labs Goya AI Processor Support Queued For Linux 5.1 [Phoronix]

Published back in January were initial open-source kernel driver patches for Habana Labs' Goya processor intended for accelerating deep learning workloads. This new Habana Labs kernel driver will debut with the mainline Linux 5.1 kernel...

06:00

European Governments Approve Controversial New Copyright Law [Slashdot]

An anonymous reader quotes a report from Ars Technica: A controversial overhaul of Europe's copyright laws overcame a key hurdle on Wednesday as a majority of European governments signaled support for the deal. That sets the stage for a pivotal vote by the European Parliament that's expected to occur in March or April. Supporters of the legislation portray it as a benign overhaul of copyright that will strengthen anti-piracy efforts. Opponents, on the other hand, warn that its most controversial provision, known as Article 13, could force Internet platforms to adopt draconian filtering technologies. The cost to develop filtering technology could be particularly burdensome for smaller companies, critics say. Online service providers have struggled to balance free speech and piracy for close to two decades. Faced with this difficult tradeoff, the authors of Article 13 have taken a rainbows-and-unicorns approach, promising stricter copyright enforcement, no wrongful takedowns of legitimate content, and minimal burdens on smaller technology platforms. But it seems unlikely that any law can achieve all of these objectives simultaneously. And digital-rights groups suspect that users will wind up getting burned -- both due to wrongful takedowns of legitimate content and because the burdens of mandatory filtering will make it harder to start a new online hosting service.

Read more of this story at Slashdot.

05:47

Linux Kernel To Better Fend Off Exploits That Disable SMAP / SMEP / UMIP Protections [Phoronix]

A change made courtesy of Google engineers to the Linux kernel will make it so exploits on Linux have a tougher time trying to disable SMAP and SMEP protections as part of their exploit path...

05:14

NetworkManager 1.16 Approaches With WireGuard VPN Tunnels, WiFi Direct Connections [Phoronix]

The release of NetworkManager 1.16 is right around the corner with this morning's first release candidate...

04:54

Unexpected Ubuntu 16.04.6 LTS Coming Due To APT Security Issue [Phoronix]

No further point releases to Ubuntu 16.04 LTS had been planned, but in light of the recent APT vulnerability, Canonical has decided to issue an Ubuntu 16.04.6 update that will be hitting the mirrors soon...

03:00

President Trump Wants US To Win 5G Through Real Competition [Slashdot]

hackingbear writes: In a tweet, President Trump said he wanted "5G, and even 6G, technology in the United States as soon as possible. I want the United States to win through competition, not by blocking out currently more advanced technologies. American companies must step up their efforts, or get left behind." While he did not specifically mention China's Huawei, many interpreted the comments as Mr Trump taking a softer stance on the firm. The U.S. has been pressuring allies to block out the Chinese telecom giant from their future 5G mobile networks, but the tactic has met considerable resistance. "Mr. President. I cannot agree with you more. Our company is always ready to help build the real 5G network in the U.S., through competition," Huawei President Ken Hu replied in a tweet, mocking Trump's frequent use of the word "real." Huawei is the second biggest holder of 5G patents after Samsung and the top contributor to the 5G standard, and is setting its sights on 6G.

Read more of this story at Slashdot.

00:00

Japan's Hayabusa 2 Successfully Touches Down On Ryugu Asteroid, Fires Bullet Into Its Surface [Slashdot]

Japan's Hayabusa 2 spacecraft successfully touched down on the asteroid Ryugu at around 11:30 GMT on Thursday. "Data from the probe showed changes in speed and direction, indicating it had reached the asteroid's surface, according to officials from the Japan Aerospace Exploration Agency (JAXA)," reports The Guardian. From the report: The probe was due to fire a bullet at the Ryugu asteroid, to stir up surface matter, which it will then collect for analysis back on Earth. The asteroid is thought to contain relatively large amounts of organic matter and water from some 4.6 billion years ago when the solar system was born. The complicated procedure took less time than expected and appeared to go without a hitch, said Hayabusa 2 mission manager Makoto Yoshikawa. The spacecraft is seeking to gather 10g of the dislodged debris with an instrument named the Sampler Horn that hangs from its underbelly. Whatever material is collected by the spacecraft will be stored onboard until Hayabusa 2 reaches its landing site in Woomera, South Australia, in 2020 after a journey of more than three billion miles. UPDATE: JAXA says it successfully fired a "bullet" into Ryugu, collecting the disturbed material. "JAXA scientists had expected to find a powdery surface on Ryugu, but tests showed that the asteroid is covered in larger gravel," reports CNN. "As a result the team had to carry out a simulation to test whether the projectile would be capable of disturbing enough material to be collected by [the Sampler Horn]. The team is planning a total of three sampling events over the next few weeks."

Read more of this story at Slashdot.

Friday, 22 February

20:30

Researchers Make Coldest Quantum Gas of Molecules [Slashdot]

An anonymous reader quotes a report from Phys.Org: JILA researchers have made a long-lived, record-cold gas of molecules that follow the wave patterns of quantum mechanics instead of the strictly particle nature of ordinary classical physics. The creation of this gas boosts the odds for advances in fields such as designer chemistry and quantum computing. As featured on the cover of the Feb. 22 issue of Science, the team produced a gas of potassium-rubidium (KRb) molecules at temperatures as low as 50 nanokelvin (nK). That's 50 billionths of a Kelvin, or just a smidge above absolute zero, the lowest theoretically possible temperature. The molecules are in the lowest-possible energy states, making up what is known as a degenerate Fermi gas. In a quantum gas, all of the molecules' properties are restricted to specific values, or quantized, like rungs on a ladder or notes on a musical scale. Chilling the gas to the lowest temperatures gives researchers maximum control over the molecules. The two atoms involved are in different classes: Potassium is a fermion (with an odd number of subatomic components called protons and neutrons) and rubidium is a boson (with an even number of subatomic components). The resulting molecules have a Fermi character. Before now, the coldest two-atom molecules were produced in maximum numbers of tens of thousands and at temperatures no lower than a few hundred nanoKelvin. JILA's latest gas temperature record is much lower than (about one-third of) the level where quantum effects start to take over from classical effects, and the molecules last for a few seconds -- remarkable longevity. These new ultra-low temperatures will enable researchers to compare chemical reactions in quantum versus classical environments and study how electric fields affect the polar interactions, since these newly created molecules have a positive electric charge at the rubidium atom and a negative charge at the potassium atom. 
Some practical benefits could include new chemical processes, new methods for quantum computing using charged molecules as quantum bits, and new precision measurement tools such as molecular clocks.

Read more of this story at Slashdot.

19:30

Frontier Demands $4,300 Cancellation Fee Despite Horribly Slow Internet [Slashdot]

Frontier Communications reportedly charged a cancellation fee of $4,302.17 to the operator of a one-person business in Wisconsin, even though she switched to a different Internet provider because Frontier's service was frequently unusable. From the report: Candace Lestina runs the Pardeeville Area Shopper, a weekly newspaper and family business that she took over when her mother retired. Before retiring, her mother had entered a three-year contract with Frontier to provide Internet service to the one-room office on North Main Street in Pardeeville. Six months into the contract, Candace Lestina decided to switch to the newly available Charter offering "for better service and a cheaper bill," according to a story yesterday by News 3 Now in Wisconsin. The Frontier Internet service "was dropping all the time," Lestina told the news station. This was a big problem for Lestina, who runs the paper on her own in Pardeeville, a town of about 2,000 people. "I actually am everything. I make the paper, I distribute the paper," she said. Because of Frontier's bad service, "I would have times where I need to send my paper -- I have very strict deadlines with my printer -- and my Internet's out." Lestina figured she'd have to pay a cancellation fee when she switched to Charter's faster cable Internet but nothing near the $4,300 that Frontier later sent her a bill for, the News 3 Now report said. Charter offered to pay $500 toward the early termination penalty, but the fee is still so large that it could "put her out of business," the news report said. [...] Lestina said the early termination fee wasn't fully spelled out in her contract. "Nothing is ever described of what those cancellation fees actually are, which is that you will pay your entire bill for the rest of the contract," she said. Lestina said she pleaded her case to Frontier representatives, without success, even though Frontier had failed to provide a consistent Internet connection. 
"They did not really care that I was having such severe problems with the service. That does not bother them," she said. Instead of waiving or reducing the cancellation fee, Frontier threatened to send the matter to a collections agency, Lestina said.

Read more of this story at Slashdot.

18:50

NVIDIA Turing-Based GeForce GTX 1660 Ti Launched At $279 [Slashdot]

MojoKid writes: NVIDIA has launched yet another graphics card today based on the company's new Turing GPU. This latest GPU, however, doesn't support NVIDIA's RTX ray-tracing technology or its DLSS (Deep Learning Super Sampling) image quality tech. The new GeForce GTX 1660 Ti does, however, bring with it all of the other GPU architecture improvements NVIDIA Turing offers. The new TU116 GPU on board the GeForce GTX 1660 Ti supports concurrent integer and floating point instructions (rather than serializing integer and FP instructions), and it also has a redesigned cache structure with double the amount of L2 cache versus its predecessor, while its L1 cache has been outfitted with a wider memory bus that ultimately doubles the bandwidth. NVIDIA's TU116 has 1,536 active CUDA cores, which is a decent uptick from the GTX 1060, but less than the current gen RTX 2060. Cards will also come equipped with 6GB of GDDR6 memory at 12 Gbps for 288 GB/s of bandwidth. Performance-wise, the new GeForce GTX 1660 Ti is typically slightly faster than a previous gen GeForce GTX 1070, and much faster than a GTX 1060. Cards should be available at retail in the next few days, starting at $279.

Read more of this story at Slashdot.

18:10

Microsoft Workers' Letter Demands Company Drop $479 Million HoloLens Military Contract [Slashdot]

A group of Microsoft workers have addressed top executives in a letter demanding the company drop a controversial contract with the U.S. army. The Verge reports: The workers object to the company taking a $479 million contract last year to supply tech for the military's Integrated Visual Augmentation System, or IVAS. Under the project, Microsoft, the maker of the HoloLens augmented reality headset, could eventually provide more than 100,000 headsets designed for combat and training in the military. The Army has described the project as a way to "increase lethality by enhancing the ability to detect, decide and engage before the enemy." "We are alarmed that Microsoft is working to provide weapons technology to the US Military, helping one country's government 'increase lethality' using tools we built," the workers write in the letter, addressed to CEO Satya Nadella and president Brad Smith. "We did not sign up to develop weapons, and we demand a say in how our work is used." The letter, which organizers say included dozens of employee signatures at publication time, argues Microsoft has "crossed the line into weapons development" with the contract. "Intent to harm is not an acceptable use of our technology," it reads. The workers are demanding the company cancel the contract, stop developing any weapons technology, create a public policy committing to not build weapons technology, and appoint an external ethics review board to enforce the policy. While the letter notes the company has an AI ethics review process called Aether, the workers say it is "not robust enough to prevent weapons development, as the IVAS contract demonstrates." "As employees and shareholders we do not want to become war profiteers," the letter sent today concludes. "To that end, we believe that Microsoft must stop in its activities to empower the U.S. Army's ability to cause harm and violence."

Read more of this story at Slashdot.

18:01

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo [The Register]

Processor designer says he's right about one thing: The need for end-to-end dev platforms

Linux kernel king Linus Torvalds this week dismissed cross-platform efforts to support his contention that Arm-compatible processors will never dominate the server market.…

17:30

Instagram Code Reveals Public 'Collections' Feature To Take On Pinterest [Slashdot]

An anonymous reader quotes a report from TechCrunch: Instagram is threatening to attack Pinterest just as it files to go public the same way the Facebook-owned app did to Snapchat. Code buried in Instagram for Android shows the company has prototyped an option to create public "Collections" to which multiple users can contribute. Instagram launched private Collections two years ago to let you Save and organize your favorite feed posts. But by allowing users to make Collections public, Instagram would become a direct competitor to Pinterest. Instagram public Collections could spark a new medium of content curation. People could use the feature to bundle together their favorite memes, travel destinations, fashion items, or art. That could cut down on unconsented content stealing that's caused backlash against meme "curators" like F*ckJerry by giving an alternative to screenshotting and reposting other people's stuff. Instead of just representing yourself with your own content, you could express your identity through the things you love -- even if you didn't photograph them yourself. The "Make Collection Public" option was discovered by frequent TechCrunch tipster and reverse engineering specialist Jane Manchun Wong. It's not available to the public, but from the Instagram for Android code, she was able to generate a screenshot of the prototype. It shows the ability to toggle on public visibility for a Collection, and tag contributors who can also add to the Collection. Previously, Collections was always a private, solo feature for organizing your bookmarks gathered through the Instagram Save feature launched in late 2016. Currently there's nothing in the Instagram code about users being able to follow each other's Collections, but that would seem like a logical and powerful next step.

Read more of this story at Slashdot.

17:09

Decoding the President, because someone has to: Did Trump just blow up concerted US effort to ban Chinese 5G kit? [The Register]

Contrarian commander-in-chief tweets, world scratches head

Comment  President Donald Trump appears to have undermined an increasingly aggressive push by the US government and telcos to pressure the world to shun Chinese equipment in next-generation 5G networks.…

16:50

Once Hailed As Unhackable, Blockchains Are Now Getting Hacked [Slashdot]

schwit1 shares a report from MIT Technology Review: Early last month, the security team at Coinbase noticed something strange going on in Ethereum Classic, one of the cryptocurrencies people can buy and sell using Coinbase's popular exchange platform. Its blockchain, the history of all its transactions, was under attack. An attacker had somehow gained control of more than half of the network's computing power and was using it to rewrite the transaction history. That made it possible to spend the same cryptocurrency more than once -- known as "double spends." The attacker was spotted pulling this off to the tune of $1.1 million. Coinbase claims that no currency was actually stolen from any of its accounts. But a second popular exchange, Gate.io, has admitted it wasn't so lucky, losing around $200,000 to the attacker (who, strangely, returned half of it days later). Just a year ago, this nightmare scenario was mostly theoretical. But the so-called 51% attack against Ethereum Classic was just the latest in a series of recent attacks on blockchains that have heightened the stakes for the nascent industry. [...] In short, while blockchain technology has been long touted for its security, under certain conditions it can be quite vulnerable. Sometimes shoddy execution can be blamed, or unintentional software bugs. Other times it's more of a gray area -- the complicated result of interactions between the code, the economics of the blockchain, and human greed. That's been known in theory since the technology's beginning. Now that so many blockchains are out in the world, we are learning what it actually means -- often the hard way.
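The "51%" threshold the article describes has a simple arithmetic behind it, the gambler's-ruin analysis from the original Bitcoin whitepaper: an attacker controlling a fraction q of the hash power eventually overtakes an honest chain that is z blocks ahead with probability (q/p)^z, where p is the honest share, and that probability snaps to 1 the moment q passes one half. A minimal sketch (the function name is mine, not from any library):

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of the total
    hash power ever overtakes an honest chain that is z blocks ahead
    (the gambler's-ruin result from the Bitcoin whitepaper)."""
    p = 1.0 - q  # honest miners' share of hash power
    if q >= p:
        return 1.0  # a majority attacker catches up with certainty
    return (q / p) ** z
```

With 30% of the hash power and six confirmations the attack is roughly a 0.6% shot; once q crosses one half, as in the Ethereum Classic incident, success is only a matter of time, which is why exchanges responded by demanding far more confirmations.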

Read more of this story at Slashdot.

16:10

YouTube Is Heading For Its Cambridge Analytica Moment [Slashdot]

Earlier this week, Disney, Nestle and others pulled their advertising spending from YouTube after a blogger detailed how comments on Google's video site were being used to facilitate a "soft-core pedophilia ring." Some of the videos involved ran next to ads placed by Disney and Nestle. With the company having faced similar problems over the years, often being "caught in a game of whack-a-mole to fix them," Matt Rosoff from CNBC writes that it's only a matter of time until YouTube faces a scandal that actually alienates users, as happened with Facebook in the Cambridge Analytica scandal. From the report: To be fair, YouTube has taken concrete steps to fix some problems. A couple of years ago, major news events were targets for scammers to post misleading videos about them, like videos claiming shootings such as the one in Parkland, Florida, were staged by crisis actors. In January, the company said it would stop recommending such videos, effectively burying them. It also favors "authoritative" sources in search results around major news events, like mainstream media organizations. And YouTube is not alone in struggling to fight inappropriate content that users upload to its platform. The problem isn't really about YouTube, Facebook or any single company. The problem is the entire business model around user-generated content, and the whack-a-mole game of trying to stay one step ahead of people who abuse it. [T]ech platforms that rely on user-generated content are protected by the 1996 Communications Decency Act, which says platform providers cannot be held liable for material users post on them. It made sense at the time -- the internet was young, and forcing start-ups to monitor their comments sections (remember comments sections?) would have exploded their expenses and stopped growth before it started. 
Even now, when some of these companies are worth hundreds of billions of dollars, holding them liable for user-generated content would blow up these companies' business models. They'd disappear, reduce services or have to charge fees for them. Voters might not be happy if Facebook went out of business or they suddenly had to start paying $20 a month to use YouTube. Similarly, advertiser boycotts tend to be short-lived -- advertisers go where they get the best return on their investment, and as long as billions of people keep watching YouTube videos, they'll keep advertising on the platform. So the only way things will change is if users get turned off so badly that they tune out. Following Facebook's Cambridge Analytica scandal, people deleted their accounts, Facebook's growth largely stalled in the U.S., and more young users have abandoned the platform. "YouTube has so far skated free of any similar scandals. But people are paying closer attention than ever before, and it's only a matter of time before the big scandal that actually starts driving users away," writes Rosoff.

Read more of this story at Slashdot.

15:43

How politics works, part 97: Telecoms industry throws a fundraiser for US senator night before he oversees, er, a telecoms privacy hearing [The Register]

Nothing like a little reminder of who's really in charge

The chairman of a US Senate committee mulling privacy protections will be thrown a reelection fundraiser by, er, the privacy-trampling telecoms industry literally the day before a key hearing.…

15:30

Apple To Close Retail Stores In the Patent Troll-Favored Eastern District of Texas [Slashdot]

An anonymous reader quotes a report from TechCrunch: Apple has confirmed its plans to close retail stores in the Eastern District of Texas -- a move that will allow the company to better protect itself from patent infringement lawsuits, according to Apple news sites 9to5Mac and MacRumors which broke the news of the stores' closures. Apple says that the impacted retail employees will be offered new jobs with the company as a result of these changes. The company will shut down its Apple Willow Bend store in Plano, Texas as well as its Apple Stonebriar store in Frisco, Texas, MacRumors reported, and Apple confirmed. These stores will permanently close up shop on Friday, April 12. Customers in the region will instead be served by a new Apple store located at the Galleria Dallas Shopping Mall, which is expected to open April 13. "The Eastern District of Texas had become a popular place for patent trolls to file their lawsuits, though a more recent Supreme Court ruling has attempted to crack down on the practice," the report adds. "The court ruled that patent holders could no longer choose where to file." One of the most infamous patent holding firms is VirnetX, which has won several big patent cases against Apple in recent years. A spokesperson for Apple confirmed the stores' closures, but wouldn't comment on the company's reasoning: "We're making a major investment in our stores in Texas, including significant upgrades to NorthPark Center, Southlake and Knox Street. With a new Dallas store coming to the Dallas Galleria this April, we've made the decision to consolidate stores and close Apple Stonebriar and Apple Willow Bend. All employees from those stores will be offered positions at the new Dallas store or other Apple locations."

Read more of this story at Slashdot.

15:22

Linux 5.0 Kernel Performance Is Sliding In The Wrong Direction [Phoronix]

With the Linux 5.0 kernel approaching the finish line, the past few days I've been ramping up my tests of this new kernel in our benchmarking farm. Unfortunately, when looking at the results at a macro level, they point towards Linux 5.0 yielding lower performance than previous kernel releases.

14:35

A Philosopher Argues That an AI Can't Be an Artist [Slashdot]

Sean Dorrance Kelly, a philosophy professor at Harvard, writes for MIT Technology Review: Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to. This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves. [...] My argument is not that the creator's responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way. It would be a mistake to interpret a machine's composition as part of such a vision of the world. The argument for this is simple. Claims like Kurzweil's that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms -- a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can't count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine's product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

Read more of this story at Slashdot.

14:20

Now you've read about the bonkers world of Elizabeth Holmes, own some Theranos history: Upstart's IT gear for sale [The Register]

Hard drives not included, for obvious reasons

Fancy owning a piece of Silicon Valley history? Hundreds of PCs, notebooks, and monitors used by infamous biotech cluster-fuck-up Theranos are set to be sold off following the $10bn-peak-valued biz's collapse.…

13:55

'Netflix Is the Most Intoxicating Portal To Planet Earth' [Slashdot]

Instead of trying to sell American ideas to a foreign audience, it's aiming to sell international ideas to a global audience. From an op-ed: In 2016, the company expanded to 190 countries, and last year, for the first time, a majority of its subscribers and most of its revenue came from outside the United States. To serve this audience, Netflix now commissions and licenses hundreds of shows meant to echo life in every one of its markets and, in some cases, to blend languages and sensibilities across its markets. In the process, Netflix has discovered something startling: Despite a supposed surge in nationalism across the globe, many people like to watch movies and TV shows from other countries. "What we're learning is that people have very diverse and eclectic tastes, and if you provide them with the world's stories, they will be really adventurous, and they will find something unexpected," Cindy Holland, Netflix's vice president for original content, told me. The strategy may sound familiar; Hollywood and Silicon Valley have long pursued expansion internationally. But Netflix's strategy is fundamentally different. Instead of trying to sell American ideas to a foreign audience, it's aiming to sell international ideas to a global audience. A list of Netflix's most watched and most culturally significant recent productions looks like a Model United Nations: Besides Ms. Kondo's show, there's the comedian Hannah Gadsby's "Nanette" from Australia; from Britain, "Sex Education" and "You"; "Elite" from Spain; "The Protector" from Turkey; and "Baby" from Italy. I'll admit there's something credulous and naive embedded in my narrative so far. Let me get this straight, you're thinking: A tech company wants to bring the world closer together? As social networks help foster misinformation and populist fervor across the globe, you're right to be skeptical. 
But there is a crucial difference between Netflix and other tech giants: Netflix makes money from subscriptions, not advertising.

Read more of this story at Slashdot.

13:15

Linus Torvalds on Why ARM Won't Win the Server Space [Slashdot]

Linus Torvalds: I can pretty much guarantee that as long as everybody does cross-development, the platform won't be all that stable. Or successful. Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment). Which means that you'll happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better. This is true even if what you mostly do is something ostensibly cross-platform like just run perl scripts or whatever. Simply because you'll want to have as similar an environment as possible. Which in turn means that cloud providers will end up making more money from their x86 side, which means that they'll prioritize it, and any ARM offerings will be secondary and probably relegated to the mindless dregs (maybe front-end, maybe just static html, that kind of stuff). Guys, do you really not understand why x86 took over the server market? It wasn't just all price. It was literally this "develop at home" issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a "real server". And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over. Do you really not understand? This isn't rocket science. This isn't some made up story. This is literally what happened, and what killed all the RISC vendors, and made x86 be the undisputed king of the hill of servers, to the point where everybody else is just a rounding error. 
Something that sounded entirely fictional a couple of decades ago. Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit "hyperscaling" model is idiotic, when you don't have customers and you don't have workloads because you never sold the small cheap box that got the whole market started in the first place.

Read more of this story at Slashdot.

12:35

Norwich's Fortnite Live Festival Was a Complete Disaster [Slashdot]

An anonymous reader shares a report: A festival designed to recreate Fortnite on the outskirts of Norwich has, somewhat predictably, not lived up to expectations. Event organisers flogged 2500 tickets to kids and parents. Entry cost upwards of $15 and unlimited access wristbands a further $26. In return, families got what amounted to a few fairground attractions. Photos from the event show a climbing wall for three people, archery for four people, and four go-karts. An attraction dubbed a "cave experience" was a lorry trailer with tarpaulin over it. An indoors area where you could play actual Fortnite was probably the best thing there -- although it cost money to access and you had to queue to do so. So much for free-to-play. And all of that was if you could actually get into the event to start with. Hundreds of people were left queuing for hours due to staff shortages.

Read more of this story at Slashdot.

12:25

Entrust Datacard lined up to unburden Thales of nCipher biz as price for Gemalto buyout [The Register]

Profitable secure SIM firm in the bag by March, Thales hopes

French defence tech conglomerate Thales has flogged off its hardware security module biz nCipher Security, a sale demanded by competition regulators over Thales' buyout of Gemalto.…

11:57

Inside Elizabeth Holmes's Chilling Final Months at Theranos [Slashdot]

In the final months of Theranos, before the blood testing start-up was debunked and its founders charged with fraud, then-CEO Elizabeth Holmes brought into the mix a puppy with a penchant for peeing, one she insisted to others was a wolf, according to Vanity Fair, which has detailed the chaos that ensued in the waning days of the startup, once valued at $9 billion. The 35-year-old Stanford University dropout has also met with filmmakers whom she hopes will make a documentary about her "real story," the outlet reported. She also "desperately wants to write a book." An excerpt from the story: Holmes brushed it off when the scientists protested that the dog hair could contaminate samples. But there was another problem with Balto (the dog's name), too. He wasn't potty-trained. Accustomed to the undomesticated life, Balto frequently urinated and defecated at will throughout Theranos headquarters. While Holmes held board meetings, Balto could be found in the corner of the room relieving himself while a frenzied assistant was left to clean up the mess. [...] By late 2017, however, Holmes had begun to slightly rein in the spending. She agreed to give up her private-jet travel (not a good look) and instead downgraded to first class on commercial airlines. But given that she was flying all over the world trying to obtain more funding for Theranos, she was spending tens of thousands of dollars a month on travel. Theranos was also still paying for her mansion in Los Altos, and her team of personal assistants and drivers, who would become regular dog walkers for Balto. But on few things had she wasted so much money as the design and monthly cost of the company's main headquarters, which employees simply referred to as "1701," for its street address along Page Mill Road in Palo Alto. 1701, according to two former executives, cost $1 million a month to rent. Holmes had also spent $100,000 on a single conference table. 
Elsewhere in the building, Holmes had asked for another circular conference room that the former employees said "looked like the war room from Dr. Strangelove," replete with curved glass windows, and screens that would come out of the ceiling so everyone in the room could see a presentation without having to turn their heads.

Read more of this story at Slashdot.

11:37

NVIDIA 390.116 Legacy & 410.104 Long-Lived Linux Drivers Released [Phoronix]

In addition to NVIDIA christening the 418 driver series as stable today with the GeForce GTX 1660 Ti release, they also issued updates for their 390 legacy driver series as well as the 410 long-lived driver release series...

11:29

Redis kills Modules' Commons Clause licensing... and replaces it with one of their own [The Register]

Confusion not severe enough to stop $60m Series E round

Redis Labs has jettisoned the Commons Clause software licence introduced last year for its Redis Modules, saying the earlier change had left some users "confused."…

11:16

Lessons From Six Software Rewrite Stories [Slashdot]

A new take on the age-old question: Should you rewrite your application from scratch, or is that "the single worst strategic mistake that any software company can make"? Turns out there are more than two options for dealing with a mature codebase. Herb Caudill: Almost two decades ago, Joel Spolsky excoriated Netscape for rewriting their codebase in his landmark essay Things You Should Never Do. He concluded that a functioning application should never, ever be rewritten from the ground up. His argument turned on two points: The crufty-looking parts of the application's codebase often embed hard-earned knowledge about corner cases and weird bugs. A rewrite is a lengthy undertaking that keeps you from improving on your existing product, during which time the competition is gaining on you. For many, Joel's conclusion became an article of faith; I know it had a big effect on my thinking at the time. In the following years, I read a few contrarian takes arguing that, under certain circumstances, it made a lot of sense to rewrite from scratch. For example: Sometimes the legacy codebase really is messed up beyond repair, such that even simple changes require a cascade of changes to other parts of the code. The original technology choices might be preventing you from making necessary improvements. Or, the original technology might be obsolete, making it hard (or expensive) to recruit quality developers. The correct answer, of course, is that it depends a lot on the circumstances. Yes, sometimes it makes more sense to gradually refactor your legacy code. And yes, sometimes it makes sense to throw it all out and start over. But those aren't the only choices. Let's take a quick look at six stories, and see what lessons we can draw.

Read more of this story at Slashdot.

10:35

Japan's Hayabusa 2 probe has got the horn for space rock Ryugu – a sampling horn, that is [The Register]

Asteroid bits collected. Next step, to hadouken a crater

Japan's Hayabusa 2 probe has successfully collected a sample from the surface of asteroid Ryugu following a careful descent last night.…

10:30

Slashdot Asks: What Are Some Programming Books You Wish You Had Read Earlier? [Slashdot]

A blog post from developer turned writer Marty Jacobs caught my attention earlier this morning. In the post, Jacobs lists some of the programming books he wishes he had discovered and read much sooner. He writes, "There are so many programming books out there, sometimes it's hard to know what books are best. Programming itself is so broad and there are so many concepts to learn." You can check out his list here. I was curious: what books would you include if you were to make a similar list?

Read more of this story at Slashdot.

09:50

Test Shows Facebook Begins Collecting Data From Several Popular Apps Seconds After Users Start Using Them. Company Also Collects Data of Non-Facebook Users. [Slashdot]

Millions of smartphone users confess their most intimate secrets to apps. Unbeknown to most people, in many cases that data is being shared with someone else: Facebook. [Editor's note: the link may be paywalled; here's an alternative source.] The Wall Street Journal reports: The social-media giant collects intensely personal information from many popular smartphone apps just seconds after users enter it, even if the user has no connection to Facebook, according to testing done by The Wall Street Journal. The apps often send the data without any prominent or specific disclosure, the testing showed. [...] In the case of apps, the Journal's testing showed that Facebook software collects data from many apps even if no Facebook account is used to log in and if the end user isn't a Facebook member. In the Journal's testing, Instant Heart Rate: HR Monitor, the most popular heart-rate app on Apple's iOS, made by California-based Azumio, sent a user's heart rate to Facebook immediately after it was recorded. Flo Health's Flo Period & Ovulation Tracker, which claims 25 million active users, told Facebook when a user was having her period or informed the app of an intention to get pregnant, the tests showed. Real-estate app Realtor.com, owned by Move, a subsidiary of Wall Street Journal parent News Corp, sent the social network the location and price of listings that a user viewed, noting which ones were marked as favorites, the tests showed. None of those apps provided users any apparent way to stop that information from being sent to Facebook. Update: New York Governor Cuomo has ordered probe into Facebook access to personal data.
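The mechanics the Journal describes match how bundled analytics SDKs generally work: the instant a measurement is recorded, the app serializes it into an event and sends it to the vendor in the background, whether or not the user has an account there. A hypothetical sketch of what such an event payload might look like (every field name here is invented for illustration; this is not Facebook's actual SDK or API):

```python
import json
import time

def build_analytics_event(app_id: str, event_name: str, params: dict,
                          has_vendor_account: bool = False) -> str:
    """Assemble a hypothetical analytics event the way a bundled SDK
    might, immediately after the user action it describes. Note that
    nothing here requires the user to have an account with the
    analytics vendor -- the event is built and sent regardless."""
    return json.dumps({
        "app_id": app_id,            # identifies the app, not any user login
        "event": event_name,         # e.g. a recorded heart rate or period entry
        "params": params,            # the intimate details themselves
        "ts": int(time.time()),      # sent seconds after the data is entered
        "vendor_account": has_vendor_account,
    })
```

An SDK would then POST this JSON to the vendor's collection endpoint in the background, with no prominent disclosure to the user, which is exactly the behavior the Journal's testing surfaced.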

Read more of this story at Slashdot.

09:48

ZX Spectrum Vega+ 'backer'? Nope, you're now a creditor – and should probably act fast [The Register]

Speak up and you might recover some of that £513k

People who paid for one of the infamous ZX Spectrum Vega+ handheld game consoles are being urged to register themselves as creditors of the company before a liquidator is appointed.…

09:33

CI/CD outfit Shippable shipped off to adopt the green tinge of JFrog [The Register]

Enterprise+: One toolkit to deliver them all

DevOps darling JFrog has snapped up cloud-based Continuous Integration and Continuous Delivery (CI/CD) outfit Shippable.…

09:30

NVIDIA 418.43 Stable Linux Driver Released, Includes GTX 1660 Ti Support [Phoronix]

As expected given today's GeForce GTX 1660 Ti launch, NVIDIA has released a new Linux graphics driver supporting the 1660 Ti as well as the RTX 2070 with Max-Q Design and RTX 2080 with Max-Q Design, among other changes...

09:00

GCC 8.3 Released With 153 Bug Fixes [Phoronix]

While the GCC 9 stable compiler release is a few weeks away in the form of GCC 9.1, the GNU Compiler Collection is up to version 8.3.0 today as their newest point release to last year's GCC 8 series...

08:40

Trust the public cloud Big Three to make non-volatile storage volatile [The Register]

NVMe drives speed VMs, but be warned – it ain't persistent

AWS and Google Cloud virtual machine instances – and as of this month, Azure's – have NVMe flash drive performance, but user be warned: drive contents are wiped when the VMs are killed.…

07:48

Infosec in spaaace! NCC and Surrey Uni to pore over satellite security [The Register]

There's a PhD position in it too, if you want to get involved

NCC Group and the University of Surrey have set up a "Space Cyber Security Research Partnership" to investigate the security issues faced by satellites.…

07:19

Qt Publishes A 2019 Public Roadmap: More Work On WebAssembly, Tooling [Phoronix]

The Qt Company has published a 2019 roadmap of sorts for areas they plan on focusing their resources this 2019 calendar year...

07:16

Saturday Morning Breakfast Cereal - Psyops [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Also cats love you with an undying loyalty but don't know how to express it.


Today's News:

07:04

Google: Hmm, this government regulation stuff looks important. Let's stick some more lobbyists on that [The Register]

Ad giant plans reshuffle to focus on privacy, anti-trust – reports

Facing down an increased interest in tech regulation, Google is said to be rejigging its global lobbying efforts and upping its focus on privacy and competition.…

06:20

AMDGPU Squeezes In Revised Context Priority Handling For Linux 5.1 [Phoronix]

With the Linux 5.1 kernel cycle soon to kick-off, an early batch of fixes for the AMDGPU DRM driver and other fixes were sent in on Thursday to queue along with all of the new functionality being staged in DRM-Next...

06:17

HPE's cold storage digit: 2% growth better than a kick in the teeth – but it's no Dell EMC [The Register]

Rest of the portfolio couldn't keep up with Nimble

HPE storage revenues – like NetApp's – grew just 2 per cent year-on-year in the firm's first 2019 quarter, with Nimble all-flash arrays leading the charge amble.…

05:07

GeForce GTX 1660 Ti Launch Today - Supported By The NVIDIA Linux Driver, No Nouveau Yet [Phoronix]

After weeks of leaks, the GeForce GTX 1660 Ti is expected to be formally announced in just a few hours. This is a ~$300 Turing graphics card, but without the ray-tracing support that has been standard across the Turing graphics cards released to date. The GTX 1600 series family is expected to expand as well in the weeks ahead...

05:02

The record shows I took the blows, and did it... Huawei: IT titan will start tackling GCHQ security gripes from June [The Register]

The iceberg has begun to change course

Stinging from British criticism over its snail's pace, Huawei has promised to start addressing complaints about its products' security, raised by Blighty's spy agency, GCHQ, by June.…

04:47

Sueballs at the ready? Google promises end to forced arbitration after wave of staff protests [The Register]

Search giant lifts ban on out-of-court talks, class-action suits

Google has said it will end forced arbitration next month and lift a ban on class-action suits after intense pressure from staffers.…

03:48

Lunar lander's brief jaunt will place Israel as fourth country to make soft landing on Moon [The Register]

If it works. Comms sat also along for ride, but there are loads already so...

SpaceX has sent the first privately funded lunar lander on its way to the Moon following an evening launch from Canaveral Air Force station.…

03:27

GCC 9 Compiler Picks Up Official Support For The Arm Neoverse N1 + E1 [Phoronix]

Earlier this week Arm announced their next-generation Neoverse N1 and E1 platforms with big performance potential and power efficiency improvements over current generation Cortex-A72 processor cores. The GNU Compiler Collection (GCC) ahead of the upcoming GCC9 release has picked up support for the Neoverse N1/E1...

03:11

Artificial Intelligence: You know it isn't real, yeah? [The Register]

It's not big and it's not clever. Well, not clever, anyway

Something for the Weekend, Sir?  "Where's the intelligence?" cried a voice from the back.…

02:06

Not so smart after all: A techie's tale of toilet noise horror [The Register]

'The perils of wrist-based motion sensors'

Ah the perils of a connected society were evidenced once again this week when some techies we know took on a pimply faced, smartwatch clad youth as an apprentice.…

01:46

Sailfish OS: Security and Data Privacy [Jolla Blog]

Mobile World Congress is back again! Like every single year during the Jolla journey, we are excited to take part in this event. We have had great experiences at past MWCs; our main drivers for attending are the current and relevant topics discussed during the congress. One of this year’s core themes is Digital Trust; “Digital trust analyses the growing responsibilities required to create the right balance with consumers, governments and regulators.” It makes us happy that these topics are being discussed, especially since several scandals have recently affected trust in digital solutions.

At Jolla we work constantly towards providing a secure and transparent solution. Our respect for our customers’ privacy is reflected in our values and actions. Back in May of 2018 our CEO Sami Pienimäki wrote a blog post on the GDPR laws passed within the European Union and stated the cornerstones of how Jolla views data privacy. This stand on privacy is not rocket science – the core idea is to respect our customers’ privacy and allow them to be in control of their data.

 Jolla’s Stand on Privacy

  1. We collect only a minimum amount of information, only what is needed to run our services.
  2. We do not monetize your data, or give your data to third parties without your express consent: we only use it to provide our services to you.
  3. We do not collect any data without your consent.
  4. And last but not least: we care about privacy at all levels.

To support our Stand on Privacy we recently attended the Computers, Privacy & Data Protection (CPDP) 2019 conference in Brussels. CPDP is a “world-leading multidisciplinary conference that offers the cutting edge in legal, regulatory, academic and technological development in privacy and data protection.”  The event took place right after the celebration of International Data Privacy Day on the 28th of January. During the event, Jolla was invited to attend a panel focused on privacy and design in mobile development.

Here are a few insights from Vesa-Matti Hartikainen, Program Manager at Jolla, who was the speaker at the panel:

“Jolla was invited to attend the panel titled “Implementing Privacy by Design into mobile development, obstacles and opportunities. A developer perspective on data protection by design” at the CPDP2019 conference. As I co-ordinated Jolla’s GDPR project, I was chosen to represent our company.

In the panel there were representatives from academia, other companies and data protection authorities. During the panel an interesting question came up: “What else (than GDPR) is necessary to finally get these solutions (Privacy by Design) into broad deployment?” The answer I gave came from my experience developing Sailfish OS. It is very difficult to compete against “free” when the competitor OS is essentially free for device vendors. The OS vendor instead recoups its investment by having its apps and services prominently present on the device, which allows it to profile users, collect data and, in the end, monetise mostly through ads in those apps and on its other services. GDPR has already had some impact on this situation, as it severely limits what organisations can do with the data and exposes what data they are collecting. It is valuable to have regulations in place and to have better visibility into what information is collected from users. The fines and controls are slowly impacting the big players. GDPR has shown some promise here in Europe, and it would be very good for privacy if other major markets followed its lead.

After the panel I had several interesting discussions with conference delegates, and it’s nice to see that privacy is a big concern here and that actions are being taken to protect people’s privacy. This concern was shown not only through the discussions but also by the reports presented during the event. A clear example is the report “Every Step You Take: How deceptive design lets Google track users 24/7”, presented by the Norwegian Consumer Council and funded by the Norwegian Government. This report analyses how Google obtains permission to track users’ location using different technologies and deceptive design practices.

Data privacy has always been at the core of Sailfish OS and of the way Jolla operates, and we believe this is one of the main reasons why our corporate customers and community believe in us. In our latest release, Sailfish 3, security has been a special focus, and it incorporates several features to improve device security and keep user data and communications private. The security-related features we’ve been developing for Sailfish 3 include, among other things: encrypted user data and communication, a new security architecture, remote lock and wipe, fingerprint support, VPN and, specifically for corporate users, Mobile Device Management.”

As Vesa-Matti mentions above, it is important to discuss privacy and security. Awareness is growing, and governments are taking action to protect the privacy and security of their citizens and businesses. For our part, we continue working on providing privacy for our customers and constantly improving the security of our solutions. We hope to have good discussions at Mobile World Congress 2019.

If you are interested in our solution you can still book a meeting with us through partners@jolla.com. If you are part of the community in Barcelona, we hope to see you at our Community Meet Up hosted with Planet Computers on Sunday the 24th at 5pm, at restaurant Bodega La Puntual.

The post Sailfish OS: Security and Data Privacy appeared first on Jolla Blog.

01:39

The Most Interesting Highlights To The Linux 5.0 Kernel [Phoronix]

With the Linux 5.0 kernel due out within the next week or two, here's a look back at the biggest end-user facing changes for this kernel release that started out as Linux 4.21...

01:29

Using the NetworkManager’s DNSMasq plugin [Fedora Magazine]

The dnsmasq plugin is a hidden gem of NetworkManager. When using the plugin, instead of using whatever DNS nameserver is doled out by DHCP, NetworkManager will configure a local copy of dnsmasq that can be customized.

You may ask, why would you want to do this? For me personally, I have two use cases:

First, on my laptop, I run a full OpenShift installation for testing purposes. In order to make this work, I really need to be able to add DNS records. I can run a local dnsmasq without NetworkManager, but this config is easier than managing my own.

Second, when I’m at home, I still want to use my home network’s DNS while on VPN. Many VPNs are configured to only route specific traffic through the VPN tunnel and leave my default route in place. This means I can access my local network’s printer and still connect to resources on the VPN.

This is very nice, as it means I can still access my network printer or listen to music from my media server while doing work. However, the VPN connection overwrites my resolv.conf with DNS servers from the VPN network. Therefore, my home network’s DNS is no longer accessible.

The dnsmasq plugin solves this by running a local dnsmasq server that is controlled by NetworkManager. My resolv.conf always points to localhost. For records defined locally (e.g. for my OpenShift Cluster), dnsmasq resolves these correctly. Using more advanced dnsmasq config, I can selectively forward requests for certain domains to specific servers (e.g. to always correctly resolve my home network hosts). And for all other requests, dnsmasq will forward to the DNS servers associated with my current network or VPN.
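The forwarding decision dnsmasq makes here is essentially longest-suffix matching on the query name. As a rough sketch (illustrative only; the `pick_upstream` helper, rule table and server addresses below are made up, not dnsmasq's actual code):

```python
def pick_upstream(name, rules, default):
    """Mimic dnsmasq's server=/domain/ip selection: the longest
    matching domain suffix wins; otherwise use the network's DNS."""
    best = None
    for domain, server in rules.items():
        if name == domain or name.endswith("." + domain):
            if best is None or len(domain) > len(best[0]):
                best = (domain, server)
    return best[1] if best else default

# server=/homelab/172.31.0.1 from the config below, expressed as a dict
rules = {"homelab": "172.31.0.1"}

print(pick_upstream("printer.homelab", rules, "10.0.0.53"))  # → 172.31.0.1
print(pick_upstream("example.com", rules, "10.0.0.53"))      # → 10.0.0.53
```

With `local=/laplab/`, dnsmasq additionally refuses to forward laplab queries upstream at all, answering them only from its own records.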

Here’s how to configure it in Fedora 29:

For some context, my laptop’s domain is called ‘laplab’ and my home domain is ‘homelab’. At home my DNS server is 172.31.0.1. Most of the DNS entries in laplab are defined in /etc/hosts, which dnsmasq can then slurp up. I also have some additional DNS entries defined for a wildcard DNS record and some aliases.

Below are the five files that need to be added. The files in dnsmasq.d could be combined, but are split up to hopefully better show the example.

  • /etc/NetworkManager/conf.d/00-use-dnsmasq.conf
  • /etc/NetworkManager/dnsmasq.d/00-homelab.conf
  • /etc/NetworkManager/dnsmasq.d/01-laplab.conf
  • /etc/NetworkManager/dnsmasq.d/02-add-hosts.conf
  • /etc/hosts
# /etc/NetworkManager/conf.d/00-use-dnsmasq.conf
#
# This enables the dnsmasq plugin.
[main]
dns=dnsmasq
# /etc/NetworkManager/dnsmasq.d/00-homelab.conf
#
# This file directs dnsmasq to forward any request to resolve
# names under the .homelab domain to 172.31.0.1, my 
# home DNS server.
server=/homelab/172.31.0.1
# /etc/NetworkManager/dnsmasq.d/01-laplab.conf
# This file sets up the local laplab domain and 
# defines some aliases and a wildcard.
local=/laplab/

# The below defines a Wildcard DNS Entry.
address=/.ose.laplab/192.168.101.125

# Below I define some host names. More are pulled in from
# /etc/hosts (see 02-add-hosts.conf).
address=/openshift.laplab/192.168.101.120
address=/openshift-int.laplab/192.168.101.120
# /etc/NetworkManager/dnsmasq.d/02-add-hosts.conf
# By default, the plugin does not read from /etc/hosts.  
# This forces the plugin to slurp in the file.
#
# If you don't want to write to the /etc/hosts file, this could
# be pointed to another file instead.
#
addn-hosts=/etc/hosts
# /etc/hosts
#  
# The hostnames defined here will be brought in and made resolvable
# by the config in the 02-add-hosts.conf file.
#
127.0.0.1   localhost localhost.localdomain 
::1         localhost localhost.localdomain 

# Notice that my hosts are in the .laplab domain, as configured
# in the 01-laplab.conf file.
192.168.101.120  ose-lap-jumphost.laplab
192.168.101.128  ose-lap-node1.laplab

# Names not in .laplab will also get picked up, so be careful
# when defining items here.
172.31.0.88     overwrite.public.domain.com

After all those files are in place, restart NetworkManager with systemctl restart NetworkManager. If everything is working right, you should see that your resolv.conf points to 127.0.0.1 and that a new dnsmasq process has spawned.

$ ps -ef | grep dnsmasq
dnsmasq   1835  1188  0 08:01 ?        00:00:00 /usr/sbin/dnsmasq --no-resolv 
--keep-in-foreground --no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid 
--listen-address=127.0.0.1 --cache-size=400 --clear-on-reload --conf-file=/dev/null 
--proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq 
--conf-dir=/etc/NetworkManager/dnsmasq.d
$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 127.0.0.1
$ host ose-lap-jumphost.laplab
ose-lap-jumphost.laplab has address 192.168.101.120

This configuration will survive reboots and, in my testing, works with almost every network and VPN I’ve tried it with.

00:52

OK, team, we've got the big demo tomorrow and we're feeling confident. Let's reboot the servers [The Register]

Uhhh... we can't log in. Doughnut, anyone?

On Call  After a long, hard week, what better way to start Friday than with a dose of On Call, El Reg's weekly column for tech traumas, mishaps and eureka moments.…

00:01

What's the frequency, KeNNeth? Neural nets trained to tune in on radar signals to boost future mobe broadband [The Register]

It's time we rise up against these AI overlords and overthrow their useful technologies

Neural networks have proven surprisingly adept at detecting radar signals – and could help the US Navy and civilian mobile networks better share their overlapping radio spectrum.…

Thursday, 21 February

23:06

EPIC demand: It's time for Google to fly the Nest after 'forgetting' to mention home alarm hub has built-in mic [The Register]

Ad giant must divorce IoT subsidiary, privacy warriors tell sleepy watchdog

Following Google's acknowledgement that it made a mistake by failing to mention that its Nest Guard alarm hub includes a microphone, the Electronic Privacy Information Center (EPIC) has asked the US Federal Trade Commission (FTC) to force the ad biz to sell its Nest division and surrender data snarfed from Nest customers.…

22:48

Raspberry Pi Begins Rolling Out The Linux 4.19 Kernel [Phoronix]

The Raspberry Pi folks have been working the past few months on upgrading their kernel in moving from Linux 4.14 to 4.19. That roll-out has now begun...

22:01

Eggheads want YOU to name Jupiter's five newly found moons ‒ and yeah, not so fast with Moony McMoonface [The Register]

Looks like someone's thought ahead this time

The Carnegie Institution for Science, a research hub headquartered in America's capital, is asking for the public’s help to name five of Jupiter’s newly discovered moons.…

22:00

Phoronix Test Suite 8.6.1 Released For Open-Source, Cross-Platform Benchmarking [Phoronix]

Phoronix Test Suite 8.6.1 is now available as a minor update over Phoronix Test Suite 8.6-Spydeberg that shipped at the start of February...

17:58

Neri, Neri, nerr-nerr: Wall St smiles on HPE despite slip in hybrid IT, compute sales [The Register]

CEO Antonio boasts of big earnings to come

HPE got a boost from Wall Street Thursday even after falling short on revenues for its latest financial quarter.…

16:30

'We don't want a camera in everyone's living room' says bloke selling cameras in living rooms. Zuckerberg, you moron [The Register]

Also: Letting people pay to stop FB snooping wouldn't be fair on the poor, apparently

Facebook is not going to give people the option to pay it to stop gathering and selling their private information because it wouldn't be fair to those that can't afford it.…

15:45

You're on a Huawei to Hell, US Sec State Pompeo warns allies: Buy Beijing's boxes, no more intelligence for you [The Register]

Don't need reason, don't need rhyme. Ain't nothing I would rather do: going down, party time

US Secretary of State Mike Pompeo has confirmed that Uncle Sam will no longer provide top-secret intelligence to countries that use Huawei equipment in their core networks.…

15:00

Early Intel i965 vs. Iris Gallium3D OpenGL Benchmarks On UHD Graphics 620 With Mesa 19.1 [Phoronix]

With yesterday's somewhat of a surprise announcement that Intel is ready to mainline their experimental Iris Gallium3D driver as their "modern" Linux OpenGL driver with numerous design advantages over their long-standing "classic" i965 Mesa driver, here are some fresh benchmarks of that latest driver compared to the current state of their OpenGL driver in Mesa 19.1.

14:51

WTF PDF: If at first you don't succeed, you may be Adobe re-patching its Acrobat, Reader patches [The Register]

Plus: How Microsoft Edge helps Facebook Flash files dodge click-to-play rules in Edge

Adobe is taking a second crack at patching security bugs in its Acrobat and Reader PDF apps.…

14:10

Oracle sued for $4.5m after ERP system delivery date 'moved from 2015 to 2016, then 2017, then... er, never' [The Register]

Lawsuit accuses Big Red of fraud, breach of contract

Software giant Oracle was sued on Wednesday by Worth & Company, a Pennsylvania-based mechanical contractor, over a failed enterprise resource planning (ERP) software deal.…

13:26

Fancy a .dev domain? They were $12,500 a pop from Google. Now, $1,000. Soon, $17.50. And you may want one [The Register]

Meanwhile, .gay comes out of the commercial closet

Google has launched a new internet extension specifically for developers but if you want to get a good name, you're going to have to pay for it.…

13:10

Deton-8. Blastobox-3. Demo-1... One of these is the name of a SpaceX crew capsule test now due to launch in March [The Register]

As experts worry about the potential for rapid unscheduled in-flight rocket disassembly

NASA this week set a date for the launch of the much-delayed Demo-1 – the first test flight of SpaceX’s Dragon capsule that will, fingers crossed, eventually ferry humans to the International Space Station.…

11:47

Intel Iris Gallium3D Driver Merged To Mainline Mesa 19.1 [Phoronix]

Well that sure didn't take long... Less than 24 hours after the merge request to mainline the Intel "Iris" Gallium3D driver was sent out, it's now been merged into the mainline code-base! The Intel Gallium3D driver is now in Mesa Git for easy testing of their next-generation OpenGL Linux driver...

11:26

Intel's Shiny Vulkan Overlay Layer Lands In Mesa 19.1 - Provides A HUD With Driver Stats [Phoronix]

As some more exciting open-source Intel Linux graphics news this week besides their new merge request to mainline the Iris Gallium3D driver, over in the Vulkan space they have merged today their overlay layer that provides a heads-up display of sorts for their Linux "ANV" driver...

10:01

Big names hurl millions of pounds at scheme to hoist UK's AI knowhow [The Register]

We're Europe's tech hub, crows minister, but investment weedy compared to the US and China

Google's DeepMind is among 11 companies to fund artificial intelligence masters degrees in the UK under a government-backed range of training programmes, including fellowships and PhD centres.…

09:33

Black-hat sextortionists required: Competitive salary and dental plan [The Register]

Cybercrims aren't just raking it in – they're dishing it out too

Extortionists are promising salaries of more than a quarter of a million pounds to skilled infosec folk willing to put on a black hat, according to research outfit Digital Shadows.…

09:00

Librem 5 Smartphone Specs Firmed Up, But Now Delayed To Q3 [Phoronix]

The Librem 5 Linux-powered smartphone originally planned to ship in January 2019 but last year was delayed to April to allow for more time to finish up work on the hardware and software. Today Purism is announcing that the Librem 5 is being delayed to "Q3" but they have been making progress particularly on the hardware side...

08:48

Linux love hits Windows 10 19H1 amid a second round of zombie slaying [The Register]

For the BOFHs: Admin Center preview loaded with Software Defined Networking goodness

In a busy week for Windows Insiders, Fast-Ring fans got a fresh build of Windows last night, hot on the heels of a new preview of the Windows Admin Center.…

08:43

Clear Linux Has A Goal To Get 3x More Upstream Components In Their Distro [Phoronix]

For those concerned that running Clear Linux means less available packages/bundles than the likes of Debian, Arch Linux, and Fedora with their immense collection of packaged software, Clear has a goal this year of increasing their upstream components available on the distribution by three times...

08:20

China’s CRISPR twins might have had their brains inadvertently enhanced [Top News - MIT Technology Review]

New research suggests that a controversial gene-editing experiment to make children resistant to HIV may also have enhanced their ability to learn and form memories.

07:46

Oracle: Major ad scam 'DrainerBot' is rinsing Android users of their battery life and data [The Register]

App piracy fighter Tapcore strenuously denies involvement

A major ad fraud operation could be sucking your phone of juice and using up more than 10GB of data a month by downloading hidden vids, Oracle has claimed.…

07:18

GNOME 3.32 Beta 2 Released [Phoronix]

Released earlier this month was the GNOME 3.32 beta which also marked the feature/UI/API freeze. Out today is the second beta for the upcoming GNOME 3.32 and now the string freeze is also in effect...

07:04

Fedora 30's Slick Boot Process Is Ready To Go [Phoronix]

The work led by Red Hat's Hans de Goede the past few Fedora release cycles has culminated with a great out-of-the-box boot experience for the upcoming Fedora 30...

06:59

Northern UK smart meter rollout is too slow, snarls MPs' committee [The Register]

But... but British Gas customers are making cost savings, though

The British government is "sugar coating" its smart meter project and pretending that "everything will turn out alright in the end", according to Parliament's Business, Energy and Industrial Strategy Committee (BEIS).…

06:28

DRAM, it feels good to be a gangsta: Only Intel flash revenues on the rise after brutal quarter [The Register]

Worse to come as market doldrums deepen

An abrupt quarter-on-quarter revenue cliff drop affected all the main flash vendors, except Intel, which saw revenues rise despite falling prices.…

05:58

There's no 'My' in Office, Microsoft insists with new productivity hub [The Register]

If you like that then you'll just LOVE our 365 range

Microsoft has updated the My Office app and would like to remind users that there's a free, online version of the suite.…

05:51

Qt Creator 4.9 Beta Brings Expanded LSP Support, Perf Profiling, C++ Improvements [Phoronix]

The Qt Company has today issued their first public beta/test release of the upcoming Qt Creator 4.9 integrated development environment...

05:29

Data breach rumours abound as UK Labour Party locks down access to member databases [The Register]

Breakaway MPs accused of making off with info

The UK's Labour Party has been forced to lock down access to membership databases and campaign tools over concerns the info was being sucked up by breakaway MPs, in a possible breach of data protection laws.…

05:00

A philosopher argues that an AI can’t be an artist [Top News - MIT Technology Review]

Creativity is, and always will be, a human endeavor.

04:59

Welcome to the sunlit uplands of HTTP/2, where a naughty request can send Microsoft's IIS into a spin [The Register]

It's patching time again for Windows Server 2016 and Windows 10

Updated  Oops! Microsoft has published an advisory on a bug in its Internet Information Services (IIS) product that allows a malicious HTTP/2 request to send CPU usage to 100 per cent.…

04:25

Samsung pulls sheets off costly phone-cum-fondleslab Galaxy Fold – and a hefty 5G monster [The Register]

Innovation? In my smartphone market? It's more likely than you think

Some day in the future you'll have a piece of material you can fold neatly away in your pocket, a canvas that just happens to be a communication and information device. Until that day, "foldable" phones will be transitional things, reminding us how far short we fall of the ideal.…

04:04

Software development and deployment? Yeah, we can help you with that... [The Register]

Just one week to save a bundle with our early bird tickets

Events  If you're gearing up to supercharge your software development and deployment operations, whether by adopting DevOps, getting serious about containers, or adding serverless into the mix, you should be joining us at Continuous Lifecycle London in May.…

03:55

BMW Volleys Open-Source "RAMSES" Distributed 3D Rendering System [Phoronix]

For those interested in distributed 3D rendering, the developers at BMW recently received clearance to open-source RAMSES, a 3D rendering system optimized for bandwidth and resource efficiency...

03:40

UK.gov pens Carillion-proofing playbook: Let's run pilots of work before we outsource it, check firms' finances [The Register]

Also makes vendor-luring pledge to take its fair share of risk

The UK government has outlined a series of safety nets designed to prevent another Carillion disaster in what it is calling an "Outsourcing Playbook".…

03:27

Mesa 19.1 Panfrost Driver Gets Pantrace & Pandecode Support To Help Reverse Engineering [Phoronix]

Since being added to Mesa 19.1 at the start of this month, the Panfrost driver has continued speeding along with bringing up this ARM Mali T600/T700/T860 open-source graphics driver support. The latest batch of code was merged overnight, including support for some reverse-engineering helpers...

03:06

Bored bloke takes control of British Army 'psyops' unit's Twitter [The Register]

Great recruiting tool there, folks

A crafty joker seized control of the British Army's "influence and outreach" Twitter account – and labelled the military unit "fun sponges" when they tried to get it back.…

02:33

The bigger they are, the harder they fall: Peak smartphone hits Apple, Samsung the worst [The Register]

Chinese upstarts fill in the gaps

The two biggest brands in the West are the two biggest losers as the smartphone slump continues, analyst Gartner has found.…

02:02

Go, go, Gadgets Boy! 'Influencer' testing 5G for Vodafone finds it to be slower than 4G [The Register]

Hilarity ensues

Big companies love to have social media "influencers" touting their wares – time-rich millennials who have turned product placement into a moderately lucrative lifestyle, often thanks to an agency.…

01:31

Preliminary Support Allows Linux KVM To Boot Xen HVM Guests [Phoronix]

As one of the most interesting patch series sent over by an Oracle developer in quite a while at least on the virtualization front, a "request for comments" series was sent out on Wednesday that would enable the Linux Kernel-based Virtual Machine (KVM) to be able to boot Xen HVM guests...

00:58

Google emits a beta of Cloud Service Platform to entice hold-outs with hybrid goodness [The Register]

Where would madam like madam's Kubernetes? Cloud? On-premises? Both?

Google's hybrid Cloud Services Platform (CSP) emerged blinking into the light today, in beta form at least.…

00:02

NASA boffins show Moon water supply could – er, this can't be right? – come from the Sun [The Register]

All rocks can produce water if irradiated in the right way

Thirsty astronauts living on the Moon may be able to extract water from the barren body, thanks to the power of the solar wind, according to NASA.…

Wednesday, 20 February

23:26

Profs prep promising privacy-protecting proxy program... Yes, it is possible to build client-server code that safeguards personal info [The Register]

Software framework teases shortcut to GDPR compliance

Computer science boffins from Harvard and MIT have developed a software framework for building web services that respect privacy, provided app developers don't mind a minor performance hit.…

23:12

AMDGPU Has Late Fixes For Linux 5.0: Golden Register Update For Vega 20, Display Fixes [Phoronix]

There are some last minute changes to the AMDGPU Direct Rendering Manager (DRM) driver for the upcoming Linux 5.0 kernel release...

23:03

Fool ML once, shame on you. Fool ML twice, shame on... the AI dev? If you can hoodwink one model, you may be able to trick many more [The Register]

Some tips on how to avoid miscreants deceiving your code

Adversarial attacks that trick one machine-learning model can potentially be used to fool other so-called artificially intelligent systems, according to a new study.…

22:04

Check yo self before you HyperWreck yo self: Cisco fixes gimme-root holes in HyperFlex, plus more security bugs [The Register]

Patches available now spread across more than a dozen advisories

Cisco emitted on Wednesday a bunch of security updates that, your support contract willing, you should test and roll out to installations as soon as possible.…

22:01

D-Bus Broker 18 Released While BUS1 In-Kernel IPC Remains Stalled [Phoronix]

Version 18 of D-Bus Broker has been released, the D-Bus message bus implementation designed for high performance and better reliability compared to the D-Bus reference implementation while sticking to compatibility with the original specification...

19:58

Where's Zero Cool when you need him? Loose chips sink ships: How hackers could wreck container vessels [The Register]

Or Acid Burn? Or Lord Nikon? Weak IT security may end in disaster at sea... one day

Poorly maintained IT systems on container ships are leaving the vessels open to cyber-attack and catastrophe, it is claimed.…

15:19

Intel Ready To Add Their Experimental "Iris" Gallium3D Driver To Mesa [Phoronix]

For just over the past year Intel open-source driver developers have been developing a new Gallium3D-based OpenGL driver for Linux systems as the eventual replacement to their long-standing "i965 classic" Mesa driver. The Intel developers are now confident enough in the state of this new driver dubbed Iris that they are looking to merge the driver into mainline Mesa proper...

13:11

KASAN Spots Another Kernel Vulnerability From Early Linux 2.6 Through 4.20 [Phoronix]

The Kernel Address Sanitizer (KASAN) that detects dynamic memory errors within the Linux kernel code has just picked up another win with uncovering a use-after-free vulnerability that's been around since the early Linux 2.6 kernels...

11:45

The first privately funded trip to the moon is about to launch [Top News - MIT Technology Review]

After failing to claim the Lunar X Prize (which, to be fair, everyone did), the Israeli firm SpaceIL could have a rover on lunar soil in a little over a month.

11:00

AMD Hiring Ten More People For Their Open-Source/Linux Driver Team [Phoronix]

If you are passionate about Linux/open-source and experienced with the 3D graphics programming and/or compute shaders, AMD is looking to expand their open-source/Linux driver team by about ten people...

09:26

Extensive Benchmarks Looking At AMD Znver1 GCC 9 Performance, EPYC Compiler Tuning [Phoronix]

With the GCC 9 compiler due to be officially released as stable in the next month or two, we've been running benchmarks of this near-final state to the GNU Compiler Collection on a diverse range of processors. In recent weeks that has included extensive compiler benchmarks on a dozen x86_64 systems, POWER9 compiler testing on the Talos II, and also the AArch64 compiler performance on recent releases of GCC and LLVM Clang. In this latest installment of our GCC 9 compiler benchmarking is an extensive look at the AMD EPYC Znver1 performance on various releases of the GCC compiler as well as looking at various optimization levels under this new compiler on the Znver1 processor.

08:00

TuxClocker: Another GPU Overclocking GUI For Linux [Phoronix]

Adding to the list of third-party GPU overclocking utilities for Linux is TuxClocker, a Qt5-based user-interface currently with support for NVIDIA graphics cards and experimental support for AMD GPUs...

07:38

OpenSUSE Leap 15.1 Reaches Beta Milestone [Phoronix]

This week openSUSE Leap 15.1 reached the beta stage for this Linux distribution derived from the same sources as SUSE Linux Enterprise 15 SP1...

07:14

Arm Neoverse N1 & E1 Platforms Announced For Cloud To Edge Computing [Phoronix]

Arm announced today their Neoverse N1 7nm platform catering towards cloud workload performance as well as the Neoverse E1 platform for high-efficiency infrastructure...

06:36

Saturday Morning Breakfast Cereal - Fed [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Shame on the three of you who enjoyed this joke.


Today's News:

04:53

Gallium Nine With NIR Is Now Running Most D3D9 Games "Flawlessly" [Phoronix]

Towards the beginning of the month we reported on the Gallium Nine state tracker working on NIR support as an alternative to its original focus on the common TGSI intermediate representation to Gallium3D. That NIR-ified version of Gallium Nine is now working and beginning to run most Direct3D 9 games fine...

01:00

Set up two-factor authentication for SSH on Fedora [Fedora Magazine]

Every day there seems to be a security breach reported in the news where our data is at risk. Despite the fact that SSH is a secure way to connect remotely to a system, you can still make it even more secure. This article will show you how.

Even if you disable passwords and only allow SSH connections using public and private keys, an unauthorized user could still gain access to your system if they steal your keys. That’s where two-factor authentication (2FA) comes in.

With two-factor authentication, you can’t connect to a server with just your SSH keys. You also need to provide the randomly generated number displayed by an authenticator application on a mobile phone.

The Time-based One-time Password algorithm (TOTP) is the method shown in this article. Google Authenticator is used as the server application, and it is available in the Fedora repositories.

For your mobile phone, you can use any two-factor authentication application that is compatible with TOTP. There are numerous free applications for Android or iOS that work with TOTP and Google Authenticator. This article uses FreeOTP as an example.
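Under the hood, TOTP is a small computation (RFC 6238): HMAC-SHA1 the count of 30-second intervals since the epoch with the shared secret, then truncate the result to six digits. A minimal Python sketch of what the authenticator app computes (illustrative only; the `totp` helper is not part of this setup):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a raw secret."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step               # 30-second interval count
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at T=59s the interval counter is 1.
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

The server and the phone share only the secret; both sides derive the same code from the current time, which is why clock skew matters when the setup tool later asks about the token window size.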

Install and set up Google Authenticator

First, install the Google Authenticator package on your server.

$ sudo dnf install -y google-authenticator

Run the application.

$ google-authenticator

The application presents you with a series of questions. The snippets below show you how to answer for a reasonably secure setup.

Do you want authentication tokens to be time-based (y/n) y
Do you want me to update your "/home/user/.google_authenticator" file (y/n)? y

The app provides you with a secret key, verification code, and recovery codes. Keep these in a secure, safe location. The recovery codes are the only way to access your server if you lose your mobile phone.

Set up mobile phone authentication

Install the authenticator application (FreeOTP) on your mobile phone. You can find it in Google Play if you have an Android phone, or in the App Store for an Apple iPhone.

A QR code is displayed on the screen. Open the FreeOTP app on your mobile phone. To add a new account, select the QR-code-shaped tool at the top of the app, and then scan the QR code. After the setup is complete, you’ll have to provide the random number generated by the authenticator application every time you connect to your server remotely.
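For reference, the QR code is just an encoding of an otpauth:// key URI that carries the base32-encoded secret. A hypothetical sketch of how such a URI is assembled (the `otpauth_uri` helper and the example values are illustrative, not the exact output of google-authenticator):

```python
import base64
import urllib.parse

def otpauth_uri(account, issuer, secret):
    """Build a TOTP key URI of the kind encoded in the QR code."""
    label = urllib.parse.quote(f"{issuer}:{account}")
    b32 = base64.b32encode(secret).decode().rstrip("=")  # padding is conventionally dropped
    return f"otpauth://totp/{label}?secret={b32}&issuer={urllib.parse.quote(issuer)}"

uri = otpauth_uri("user@example.com", "example.com", b"supersecretkey12")
print(uri)  # otpauth://totp/example.com%3Auser%40example.com?secret=...&issuer=example.com
```

Scanning the QR code simply saves this secret into the app, which then derives codes from it locally.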

Finish configuration

The application asks further questions. The example below shows you how to answer to set up a reasonably secure configuration.

Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y
By default, tokens are good for 30 seconds. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. If you experience problems with poor time synchronization, you can increase the window from its default size of +-1min (window size of 3) to about +-4min (window size of 17 acceptable tokens).
Do you want to do so? (y/n) n
If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y

Now you have to set up SSH to take advantage of the new two-factor authentication.

Configure SSH

Before completing this step, make sure you’ve already established a working SSH connection using public SSH keys, since we’ll be disabling password connections. If there is a problem or mistake, having a connection will allow you to fix the problem.

On your server, use sudo to edit the /etc/pam.d/sshd file.

$ sudo vi /etc/pam.d/sshd

Comment out the auth substack password-auth line:

#auth       substack     password-auth

Add the following line to the bottom of the file.

auth sufficient pam_google_authenticator.so

Save and close the file. Next, edit the /etc/ssh/sshd_config file.

$ sudo vi /etc/ssh/sshd_config

Look for the ChallengeResponseAuthentication line and change it to yes.

ChallengeResponseAuthentication yes

Look for the PasswordAuthentication line and change it to no.

PasswordAuthentication no

Add the following line to the bottom of the file.

AuthenticationMethods publickey,password publickey,keyboard-interactive

Save and close the file, and then restart SSH.

$ sudo systemctl restart sshd

Testing your two-factor authentication

When you attempt to connect to your server, you're now prompted for a verification code.

[user@client ~]$ ssh user@example.com
Verification code:

The verification code is randomly generated by your authenticator application on your mobile phone. Since this number changes every few seconds, you need to enter it before it changes.
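
The codes are not actually random: they are derived from a shared secret and the current 30-second time slot using the TOTP algorithm (RFC 6238). A minimal sketch in Python, for illustration only (use the real authenticator app, not this):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, now: float = None, period: int = 30, digits: int = 6) -> str:
    """Compute a TOTP code from a base32 shared secret (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at T=59 seconds:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59))  # 287082
```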

If you do not enter the verification code, you won’t be able to access the system, and you’ll get a permission denied error:


[user@client ~]$ ssh user@example.com
Verification code:
Verification code:
Verification code:
Permission denied (keyboard-interactive).
[user@client ~]$

Conclusion

By adding this simple two-factor authentication, you’ve now made it much more difficult for an unauthorized user to gain access to your server.

Tuesday, 19 February

07:46

Saturday Morning Breakfast Cereal - Life Online [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Actually, the vast majority of early Internet time was spent in an AOL chatroom pretending to be a sexy vampire.


Today's News:

Monday, 18 February

17:00

Autoscaling Mesos Clusters with Clusterman [Yelp Engineering and Product Blog]

Here at Yelp, we host a lot of servers in the cloud. In order to make our website more reliable—yet cost-efficient during periods of low utilization—we need to be able to autoscale clusters based on usage metrics. There are quite a few existing technologies for this purpose, but none of them really meet our needs of autoscaling extremely diverse workloads (microservices, machine learning jobs, etc.) at Yelp’s scale. In this post, we’ll describe our new in-house autoscaler called Clusterman (the “Cluster Manager”) and its magical ability to unify autoscaling resource requests for diverse workloads. We’ll also describe the Clusterman simulator,...

07:20

Saturday Morning Breakfast Cereal - Unfinished Business [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Tiny apartment. Working for other people all the time. Often cold and malicious. My God... this is where genies come from.


Today's News:

02:10

Building Flatpak apps in Gnome Builder on Fedora Silverblue [Fedora Magazine]

If you are developing software using Fedora Silverblue, and especially if what you are developing is a Gnome application, Gnome Builder 3.30.3 feels like an obvious choice of IDE.

In this article, I will show you how you can create a simple Gnome application, and how to build it and install it as a Flatpak app on your system.

Gnome and Flatpak applications

Builder has been a part of Gnome for a long time. To me, it is a very mature IDE in terms of consistency and completeness.

The Gnome Builder project website offers extensive documentation regarding Gnome application development — I highly recommend spending some time there to anyone interested.

Editor’s note: Getting Builder

Because the initial Fedora Silverblue installation doesn’t include Builder, let’s walk through the installation process first.

Starting with a freshly installed system, the first thing you’ll need to do is to enable a repository providing Builder as a Flatpak — we’ll use Flathub which is a popular 3rd-party repository with many desktop apps.

To enable Flathub on your system, download the repository file from the Fedora Quick Setup page and double-click it. This opens Gnome Software, which asks you to enable this repository on your system.

After you’re done with that, you can search for Builder in Gnome Software and install it.

Creating a new project

So let’s walk through the creation of a new project for our Gnome app. When you start Gnome Builder, the first display is oriented towards project management.

To create a new project, I clicked on the New… button at the top-left corner which showed me the following view.

You’ll need to fill out the project name, choose your preferred language (I chose C, but other languages will work for this example as well), and the license. Leave the version control on, and select Gnome Application as your template.

I chose gbfprtfsb as the name of my project which means Hello from Gnome 3 on Fedora SilverBlue.

The IDE creates and opens the project once you press create.

Tweaking our new project

The newly created project is opened in the Builder IDE and on my system looks like the following.

This project could be run from within the IDE right now and would give you the ever popular “Hello World!” titled gnome windowed application with a label that says, yup “Hello World!”.

Let’s get a little disruptive and mess up the title and greeting a bit. Complacency leads to mediocrity which leads to entropy overcoming chaos to enforce order, stasis, then finally it all just comes to a halt. It’s therefore our duty to shake it up at every opportunity, if only to knock out any latent entropy that may have accumulated in our systems. Towards such lofty goals, we only need to change two lines of one file, and the file isn’t even a C language file, it’s an XML file used to describe the GUI named gbfprtfsb-window.ui. All we have to do is open it and edit the title and label text, save and then build our masterpiece!

Looking at the screenshot below, I have circled the text we are going to replace. The window is a GtkApplicationWindow, and uses a GtkHeaderBar and GtkLabel to display the text we are changing. In the GtkHeaderBar we will type GBFPRTFSB for the title property. In the GtkLabel we will type Hello from Gnome 3 on Fedora SilverBlue in the label property. Now save the file to record our changes.

Building the project

Well, we have made our changes, and expressed our individualism (cough) at the same time. All that is left is to build it and see what it looks like. The build panel is located near the top of the IDE, middle right, and is represented by the icon that appears to be a brick wall being built as shown on the following picture.

Press the button, and the build process completes. You can also preview your application by clicking on the “play” button next to it.

Building a Flatpak

When we’re happy with our creation, the next step will be building it as a Flatpak. To do that, click on the title in the middle of the top bar, and then on the Export Bundle button.

Once the export has successfully completed, Gnome Builder will open a Nautilus file browser window showing the export directory, with the Flatpak bundle already selected.

To install the app on your system, simply double-click the icon, which opens Gnome Software, allowing you to install the app. On my system I had to enter my user password twice, which I take to be because we had not configured a GPG key for the project. After it was installed, the application was shown alongside all of the other applications on my system. It can be seen running below.

I think this has successfully shown how easy it is to deploy an application as a Flatpak bundle for Gnome using Builder, and then run it on Fedora Silverblue.

Sunday, 17 February

07:51

Saturday Morning Breakfast Cereal - Just Sayin [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Hey, if you're making every side angry you must be doing something right, or maybe burning down orphanages.


Today's News:

Saturday, 16 February

08:47

Saturday Morning Breakfast Cereal - Okay [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If we slightly alter the formulation it's practically illegal NOT to sell it to you!


Today's News:

06:00

The technology behind OpenAI’s fiction-writing, fake-news-spewing AI, explained [Top News - MIT Technology Review]

The language model can write like a human, but it doesn’t have a clue what it’s saying.

Friday, 15 February

05:00

AI is reinventing the way we invent [Top News - MIT Technology Review]

The biggest impact of artificial intelligence will be to help humans make discoveries we couldn’t make on our own.

01:00

How to watch for releases of upstream projects [Fedora Magazine]

Do you want to know when a new version of your favorite project is released? Do you want to make your job as packager easier? If so, this article is for you. It introduces you to the world of release-monitoring.org. You’ll see how it can help you catch up with upstream releases.

What is release-monitoring.org?

Release-monitoring.org is a combination of two applications: Anitya and the-new-hotness.

Anitya is what you can see when visiting release-monitoring.org. You can use it to add and manage your projects. Anitya also checks for new releases periodically.

The-new-hotness is an application that catches the messages emitted by Anitya. It creates a Bugzilla issue if the project is mapped to a Fedora package.

How to use release-monitoring.org

Now that you know how it works, let’s focus on how you can use it.

Index page of release-monitoring.org

The first thing you need to do is log in. Anitya provides a few options you can use to log in, including the Fedora Account System (FAS), Yahoo!, or a custom OpenID server.

Login page

When you’re logged in, you’ll see new options in the top panel.

Anitya top panel

Add a new project

Now you can add a new project. It’s always good to check whether the project is already added.

Add project form

Next, fill in the information about the project:

  • Project name – Use the upstream project name
  • Homepage – Homepage of the project
  • Backend – The backend is simply the web hosting service where the project is hosted. Anitya offers many backends you can choose from. If you can’t find a backend for your project, you can use the custom backend. Every backend has its own additional fields. For example, BitBucket has you specify owner/project.
  • Version scheme – This is used to sort received versions. Right now, Anitya only supports RPM version scheme.
  • Version prefix – This is the prefix that is stripped from any received version. For example, if the tag on GitHub is version_1.2.3, you would use version_ as version prefix. The version will then be presented as 1.2.3. The version prefix v is stripped automatically.
  • Check latest release on submit – If you check this, Anitya will do an initial check on the project when submitted.
  • Distro – The distribution in which this project is used. This could be also added later.
  • Package – The project’s packaged name in the distribution. This is required when the Distro field is filled in.
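
The version-prefix stripping described above can be sketched like this (an illustration of the documented behavior, not Anitya's actual code):

```python
def normalize_version(tag: str, prefix: str = "version_") -> str:
    """Strip a configured prefix, then the automatic 'v' prefix, from a tag."""
    if prefix and tag.startswith(prefix):
        tag = tag[len(prefix):]
    if tag.startswith("v"):  # the "v" prefix is stripped automatically
        tag = tag[1:]
    return tag

print(normalize_version("version_1.2.3"))       # 1.2.3
print(normalize_version("v2.0.1", prefix=""))   # 2.0.1
```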

When you’re happy with the project, submit it. Below you can see how your project may look after you submit.

Project page

Add a new distribution mapping

If you want to map the project to a package on a specific distribution, open up the project page first and then click on Add new distribution mapping.

Add distribution mapping form

Here you can choose any distribution already available in Anitya, fill in the package name, and submit it. The new mapping will show up on the project page.

Automatic filing of Bugzilla issues

Now you have created a new project and a mapping for it. This is nice, but how does this help you as a packager? This is where the-new-hotness comes into play.

Every time the-new-hotness sees a new update or new mapping message emitted by Anitya, it checks whether this project is mapped to a package in Fedora. For this to work, the project must have a mapping to Fedora added in Anitya.

If the package is known, the-new-hotness checks the notification setting for this package. That setting can be changed here. The last check the-new-hotness does is whether the version reported by Anitya is newer than the current version of this package in Fedora Rawhide.
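
That last check, "is the reported version newer?", can be illustrated with a naive dotted-version comparison (real RPM version comparison handles epochs, letters and more, so treat this only as a sketch):

```python
def is_newer(upstream: str, packaged: str) -> bool:
    """Naively compare dotted numeric versions, e.g. '1.10.0' vs '1.9.2'."""
    def key(version: str):
        return [int(part) for part in version.split(".")]
    return key(upstream) > key(packaged)

print(is_newer("1.10.0", "1.9.2"))  # True: 10 > 9 numerically, unlike a string compare
print(is_newer("2.0.0", "2.0.0"))   # False: the same version is not newer
```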

If all those checks pass, a new Bugzilla issue is filed and a Koji scratch build is started. After the Koji build finishes, the Bugzilla issue is updated with the output.

Future plans for release-monitoring.org

The release-monitoring.org system is pretty amazing, isn’t it? But this isn’t all. There are plenty of things planned for both Anitya and the-new-hotness. Here’s a short list of future plans:

Anitya

  • Add libraries.io consumer – automatically check for new releases on libraries.io, create projects in Anitya and emit messages about updates
  • Use Fedora package database to automatically guess the package name in Fedora based on the project name and backend
  • Add semantic and calendar version scheme
  • Change current cron job to service: Anitya checks for new versions periodically using a cron job. The plan is to change this to a service that checks projects using queues.
  • Support for more than one version prefix

the-new-hotness

  • File Github issues for Flathub projects when a new version comes out
  • Create pull requests in Pagure instead of filing a Bugzilla issue
  • Move to OpenShift – this should make deployment much easier than how it is now
  • Convert to Python 3 (mostly done)

Both

  • Conversion to fedora-messaging – This is already in progress and should make communication between Anitya and the-new-hotness more reliable.

Photo by Alexandre Debiève on Unsplash.

Thursday, 14 February

08:33

Saturday Morning Breakfast Cereal - Job Interview [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
In my defense, I've seen Joann Sfar do this with bubbles in a non-erotic context. Then again, he's French, so maybe it was erotic after all.


Today's News:

BAHFest MIT tickets are now on sale!



Wednesday, 13 February

16:42

Python 3.8 alpha in Fedora [Fedora Magazine]

The Python developers have released the first alpha of Python 3.8.0 and you can already try it out in Fedora! Test your Python code with 3.8 early to avoid surprises once the final 3.8.0 is out in October.

Install Python 3.8 on Fedora

If you have Fedora 29 or newer, you can install Python 3.8 from the official software repository with dnf:

$ sudo dnf install python38

As more alphas, betas and release candidates of Python 3.8 are released, the Fedora package will receive updates. There’s no need to compile your own development version of Python; just install it and it stays up to date. New features will be added until the first beta.

Test your projects with Python 3.8

Run the python3.8 command to use Python 3.8, or create virtual environments with the built-in venv module, tox, or pipenv. For example:

$ git clone https://github.com/benjaminp/six.git
Cloning into 'six'...
$ cd six/
$ tox -e py38
py38 runtests: commands[0] | python -m pytest -rfsxX
================== test session starts ===================
platform linux -- Python 3.8.0a1, pytest-4.2.1, py-1.7.0, pluggy-0.8.1
collected 195 items

test_six.py ...................................... [ 19%]
.................................................. [ 45%]
.................................................. [ 70%]
..............................................s... [ 96%]
....... [100%]
========= 194 passed, 1 skipped in 0.25 seconds ==========
________________________ summary _________________________
py38: commands succeeded
congratulations 🙂

What’s new in Python 3.8

So far, only the first alpha was released, so more features will come. You can however already try out the new walrus operator:

$ python3.8
Python 3.8.0a1 (default, Feb 7 2019, 08:07:33)
[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> while not (answer := input('Say something: ')):
...     print("I don't like empty answers, try again...")
...
Say something:
I don't like empty answers, try again...
Say something: Fedora
>>>

And stay tuned for Python 3.8 as python3 in Fedora 31!

06:52

Saturday Morning Breakfast Cereal - Logic Gates [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you can do logic gates in your head, please check to confirm you aren't a replicant. Thank you.


Today's News:

02:38

Convert your Fedora Silverblue to HTPC with Kodi [Fedora Magazine]

Ever wanted to create an HTPC from an old computer lying around? Or do you just have some spare time and want to try something new? If so, this article is for you. It will show you the step-by-step process to convert a Fedora Silverblue installation into a fully fledged HTPC.

What is Fedora Silverblue, Kodi and HTPC?

Fedora Silverblue is a system similar to Fedora Workstation. It offers an immutable filesystem (only /var and /etc are writable) and atomic updates using an ostree image, which provides reliable updates with the ability to roll back to a previous version easily. If you want to find out more about Fedora Silverblue, visit https://silverblue.fedoraproject.org/, or if you want to try it yourself, you can get it here.

Kodi is one of the best multimedia players available. It provides plenty of features (like automatic downloads of metadata for movies, support for UPnP, etc.) and it’s open source. It also has many addons, so if you are missing any functionality, you can probably find an addon for it.

HTPC is just an acronym for Home Theater PC: in simple words, a PC that is mainly used as an entertainment station. You can connect it to a TV or any monitor and use it to watch your favorite movies and TV shows or listen to your favorite music.

Why choose Silverblue to create an HTPC?

So why choose Fedora Silverblue for an HTPC? The main reasons are:

  • Reliability – you don’t need to fear that everything will stop working after an update, and if it does, you can easily roll back to the previous version
  • New technology – it is a good opportunity to play with a new technology.

And why choose Kodi? As stated before, it’s one of the best multimedia players, and it’s packaged as a Flatpak, which makes it easy to install on Silverblue.

Conversion of Fedora Silverblue to HTPC

Let’s go step by step through this process and see how to create a fully usable HTPC from Fedora Silverblue.

1. Installation of Fedora Silverblue

The first thing you need to do is install Fedora Silverblue. This guide will not cover the installation process, but you can expect a process similar to a standard Fedora Workstation installation. You can get the Fedora Silverblue ISO here.

Don’t create any user during the installation; just set the root password. We will create a user for Kodi later.

2. Creation of user for Kodi

After the installation finishes and the system boots, you need to create the user that Kodi will run as.

Go through the GNOME initial setup and create a kodi user. You will need to provide a password. The created kodi user will have sudo permissions, but we will remove them at the end.

It’s also recommended you upgrade Fedora Silverblue. Press the Super key (this is usually the key between Alt and Ctrl) and type terminal. Then start the upgrade.

rpm-ostree upgrade

And reboot the system.

systemctl reboot

3. Installation of Kodi from Flathub

Open a terminal and add a Flathub remote repository.

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

With the Flathub repository added the installation of Kodi is simple.

flatpak install flathub tv.kodi.Kodi

4. Set Kodi as autostart application

First, create the autostart directory.

mkdir -p /home/kodi/.config/autostart

Then create a symlink for the Kodi desktop file.

ln -s /var/lib/flatpak/exports/share/applications/tv.kodi.Kodi.desktop /home/kodi/.config/autostart/tv.kodi.Kodi.desktop

5. Set autologin for kodi user

This step is very useful together with the autostart of Kodi. Every time you restart your HTPC, you will end up directly in Kodi rather than in GDM or the GNOME shell. To set up auto login, add the following lines to the [daemon] section of /etc/gdm/custom.conf.

AutomaticLoginEnable=True       
AutomaticLogin=kodi

6. Enable automatic updates

For automatic HTPC updates we will use systemd timers. First create a /etc/systemd/system/htpc-update.service file with the following content.

[Unit]
Description=Update HTPC

[Service]
Type=oneshot
ExecStart=/usr/bin/sh -c 'rpm-ostree upgrade; flatpak update -y; systemctl reboot'

Then create a /etc/systemd/system/htpc-update.timer file with the following content.

[Unit]
Description=Run htpc-update.service once a week

[Timer]
OnCalendar=Wed *-*-* 04:00:00

Start the timer from terminal.

systemctl start htpc-update.timer

You can check if the timer is set with the following command.

systemctl list-timers

This timer will run at 4:00 a.m. each Wednesday. It is recommended to set this to a time when nobody will use the HTPC.

7. Remove root permissions

Now you don’t need root permissions for kodi anymore, so remove it from the wheel group. To do this type following command in a terminal.

sudo usermod -G kodi kodi

8. Disable GNOME features

There are a few GNOME features that could be annoying when using Fedora Silverblue as an HTPC. Most of these features can be set up directly in Kodi anyway, so if you want them later, it’s easy to configure them there.

To do this, type the following commands.

# Display dim
dconf write "/org/gnome/settings-daemon/plugins/power/idle-dim" false

# Sleep over time
dconf write "/org/gnome/settings-daemon/plugins/power/sleep-inactive-ac-type" 0

# Screensaver
dconf write "/org/gnome/desktop/screensaver/lock-enabled" false

# Automatic updates through gnome-software
dconf write "/org/gnome/software/download-updates" false

And that’s it, you just need to do one last restart to apply the dconf changes. After the restart you will end up directly in Kodi.

Kodi

What now?

Now I recommend you play with the Kodi settings a little and set it up to your liking. You can find plenty of guides on the internet.

If you want to automate the process you can use my ansible script that was written just for this occasion.

EDITOR’S NOTE: This article has been edited since initial publication to reflect various improvements and to simplify the procedure.


Photo by Sven Scheuermeier on Unsplash

Tuesday, 12 February

07:52

Saturday Morning Breakfast Cereal - Robot Love [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
This is what people mean when they say 'think of the children' right?


Today's News:

Monday, 11 February

07:15

Saturday Morning Breakfast Cereal - Rapunzel [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I actually looked up the weight of hair, and this doesn't really work because hair is something like .1 kg per foot. However, since it's a free body diagram, we can round that up to 10 and it's fine.


Today's News:

Dorkwads of London! BAHFest is still a month out and half the tickets are gone! Buy soon!

02:00

Deploy a Django REST service on OpenShift [Fedora Magazine]

In a previous article we saw how to build a “To Do” application using the Django REST Framework. In this article we will look at how we can use Minishift to deploy this application on a local OpenShift cluster.

Prerequisites

This article is the second part of a series; you should make sure that you have read the first part, linked right below. All the code from the first part is available on GitHub.

Getting started with Minishift

Minishift allows you to run a local OpenShift cluster in a virtual machine. This is very convenient when developing a cloud native application.

Install Minishift

To install Minishift the first thing to do is to download the latest release from their GitHub repository.

For example, on Fedora 29 64-bit, you can download the following release:

$ cd ~/Download
$ curl -LO https://github.com/minishift/minishift/releases/download/v1.31.0/minishift-1.31.0-linux-amd64.tgz

The next step is to copy the content of the tarball into your preferred location, for example ~/.local/bin.

$ cp ~/Download/minishift-1.31.0-linux-amd64.tgz ~/.local/bin
$ cd ~/.local/bin
$ tar xzvf minishift-1.31.0-linux-amd64.tgz
$ cp minishift-1.31.0-linux-amd64/minishift .
$ rm -rf minishift-1.31.0-linux-amd64
$ source ~/.bashrc

You should now be able to run the minishift command from the terminal

$ minishift version
minishift v1.31.0+cfc599

Set up the virtualization environment

To run, Minishift needs to create a virtual machine, therefore we need to make sure that our system is properly configured. On Fedora we need to run the following commands:

$ sudo dnf install libvirt qemu-kvm
$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
$ sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 -o /usr/local/bin/docker-machine-driver-kvm
$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Starting Minishift

Now that everything is in place we can start Minishift by simply running:

$ minishift start
-- Starting profile 'minishift'
....
....

The server is accessible via web console at:
https://192.168.42.140:8443/console

Using the URL provided (make sure to use your cluster IP address) you can access the OpenShift web console and login using the username developer and password developer.

If you face any problem during the Minishift installation, it is recommended to follow the details of the installation procedure.

Building the Application for OpenShift

Now that we have an OpenShift cluster running locally, we can look at adapting our “To Do” application so that it can be deployed on the cluster.

Working with PostgreSQL

In the first part of this article series, we used SQLite as the database backend to speed up development and make it easy to have a working development environment. Now that we are looking at running our application in a production-like cluster, we add support for PostgreSQL.

In order to keep the SQLite setup working for development, we are going to create a separate settings file for production.

$ cd django-rest-framework-todo/todo_app
$ mkdir settings
$ touch settings/__init__.py
$ cp settings.py settings/local.py
$ mv settings.py settings/production.py
$ tree settings/
settings/
├── __init__.py
├── local.py
└── production.py

Now that we have two settings files, one for local development and one for production, we can edit production.py to use the PostgreSQL database settings.

In todo_app/settings/production.py, replace the DATABASES dictionary with the following:

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "todoapp",
        "USER": "todoapp",
        "PASSWORD": os.getenv("DB_PASSWORD"),
        "HOST": os.getenv("DB_HOST"),
        "PORT": "",
    }
}

As you can see, we are using Django’s PostgreSQL backend, and we are also making use of environment variables to store secrets and values that are likely to change.
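
One nice property of reading configuration from environment variables is that you can fail fast at startup instead of at the first database query. A small hypothetical helper (not part of the article's code) illustrates the pattern:

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, or fail loudly at startup."""
    value = os.getenv(name)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

os.environ["DB_HOST"] = "172.30.88.94"  # simulate the deployment environment
print(require_env("DB_HOST"))
```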

While we are editing the production settings, let’s configure another secret, the SECRET_KEY. Replace the current value with the following.

SECRET_KEY = os.getenv("DJANGO_SECRET_KEY")
ALLOWED_HOSTS = ["*"]
DEBUG = False

REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': (
        'rest_framework.renderers.JSONRenderer',
    )
}

We edited the ALLOWED_HOSTS variable to allow any host or domain to be served by Django, and we set the DEBUG variable to False. Finally, we configured the Django REST Framework to render only JSON, which means there will be no HTML interface to interact with the service.

Building the application

We are now ready to build our application in a container, so that it can run on OpenShift. We are going to use the source-to-image (s2i) tool to build a container directly from the git repository. That way we do not need to worry about maintaining a Dockerfile.

For the s2i tool to be able to build our application, we need to make a few changes to our repository. First, let’s create a requirements.txt file to list the dependencies needed by the application.

Create django-rest-framework-todo/requirements.txt and add the following:

django
djangorestframework
psycopg2-binary
gunicorn

psycopg2-binary is the client library used to connect to the PostgreSQL database, and gunicorn is the web server we are using to serve the application.

Next we need to make sure to use the production settings. In django-rest-framework-todo/manage.py and django-rest-framework-todo/wsgi.py edit the following line:

os.environ.setdefault('DJANGO_SETTINGS_MODULE','todo_app.settings.production')

Application Deployment

That’s it, we can now create a new project in OpenShift and deploy the application. First let’s login to Minishift using the command line tool.

$ oc login
Authentication required for https://192.168.42.140:8443 (openshift)
Username: developer
Password: developer
Login successful.
....
$ oc new-project todo
Now using project "todo" on server "https://192.168.42.140:8443".
....

After logging in to the cluster, we created a new project, “todo”, to run our application. The next step is to create a PostgreSQL application.

 $ oc new-app postgresql POSTGRESQL_USER=todoapp POSTGRESQL_DATABASE=todoapp POSTGRESQL_PASSWORD=todoapp

Note that we are passing the environment variables needed to configure the database service; these are the same as our application settings.

Before we create our application, we need to know the database host address.

$ oc get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
postgresql   ClusterIP   172.30.88.94   <none>        5432/TCP   3m

We will use the CLUSTER-IP to configure the DB_HOST environment variable of our Django application.

Let’s create the application:

oc new-app centos/python-36-centos7~https://github.com/cverna/django-rest-framework-todo.git#production DJANGO_SECRET_KEY=a_very_long_and_random_string DB_PASSWORD=todoapp DB_HOST=172.30.88.94

We are using the centos/python-36-centos7 s2i image with a source repository from GitHub. Then we set the needed environment variables DJANGO_SECRET_KEY, DB_PASSWORD and DB_HOST.

Note that we are using the production branch from that repository and not the default master branch.

The last step is to make the application available outside of the cluster. To do this, execute the following commands.

$ oc expose svc/django-rest-framework-todo
$ oc get route
NAME HOST/PORT
django-rest-framework-todo django-rest-framework-todo-todo.192.168.42.140.nip.io

You can now use the HOST/PORT address to access the web service.

Note that the build takes a couple of minutes to complete.

Testing the application

Now that we have our service running, we can use HTTPie to test it easily. First let’s install it.

$ sudo dnf install httpie

We can now use the http command line to send requests to our service.

$ http -v GET http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/
....
[]

$ http -v POST http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/ title="Task 1" description="A new task"
...
{
"description": "A new task",
"id": 1,
"status": "todo",
"title": "Task 1"
}
$ http -v PATCH http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/1 status="wip"
{
"status": "wip"
}
$ http --follow -v GET http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/1
{
"description": "A new task",
"id": 1,
"status": "wip",
"title": "Task 1"
}

Conclusion

In this article, we have learned how to install Minishift on a local development system and how to build and deploy a Django REST application on OpenShift. The code for this article is available on GitHub.


Photo by chuttersnap on Unsplash

Sunday, 10 February

07:15

Saturday Morning Breakfast Cereal - Cave [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
I wonder if there's a rat experiment analog for spoiler-related anger.


Today's News:

Friday, 08 February

07:09

Saturday Morning Breakfast Cereal - By Jove [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
This is an actual proposal I came across while researching for a project.


Today's News:

Pssst. Hey, dorks of London. Want to see the world's nerdiest comedy night ever?



Thursday, 07 February

09:14

Saturday Morning Breakfast Cereal - Recommendations [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
The weird part is when yelp recommends for you to go drink borax and you do it because they've never been wrong in the past.


Today's News:

03:45

Fedora logo redesign [Fedora Magazine]

The current Fedora Logo has been used by Fedora and the Fedora Community since 2005. However, over the past few months, Máirín Duffy and the Fedora Design team, along with the wider Fedora community have been working on redesigning the Fedora logo.

Far from being an arbitrary logo change, this process is being undertaken to solve a number of issues encountered with the current logo. These include the lack of a single-colour variant and, consequently, the logo not working well on dark backgrounds. Other challenges are confusion with other well-known brands, and the use of a proprietary font.

The new Fedora Logo design process

Last month, Máirín posted an amazing article about the history of the Fedora logo, a detailed analysis of the challenges with the current logo, and a proposal of two candidates. A wide ranging discussion with the Fedora community followed, including input from Matt Muñoz, the designer of the current Fedora logo. After the discussions, the following candidate was chosen for further iteration:

In a follow-up post this week, Máirín summarizes the discussions and critiques that took place around the initial proposal, and details the iterations that took place as a result.

After all the discussions and iterations, the team is currently considering the following three candidates:

Join the discussion on the redesign over at Máirín’s blog, and be sure to read the initial post to get the full story on the process undertaken to get to this point.

Wednesday, 06 February

07:52

Saturday Morning Breakfast Cereal - Monotreme [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
In retrospect, we really should've organized the entire nomenclature system around hole-count.


Today's News:

01:00

4 cool new projects to try in COPR for February 2019 [Fedora Magazine]

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

CryFS

CryFS is a cryptographic filesystem. It is designed for use with cloud storage, mainly Dropbox, although it works with other storage providers as well. CryFS encrypts not only the files in the filesystem, but also metadata, file sizes and directory structure.

Installation instructions

The repo currently provides CryFS for Fedora 28 and 29, and for EPEL 7. To install CryFS, use these commands:

sudo dnf copr enable fcsm/cryfs
sudo dnf install cryfs

Cheat

Cheat is a utility for viewing various cheatsheets on the command line, aiming to help users recall programs they use only occasionally. For many Linux utilities, cheat provides cheatsheets containing condensed information from man pages, focused mainly on the most common examples. In addition to the built-in cheatsheets, cheat allows you to edit the existing ones or create new ones from scratch.

Installation instructions

The repo currently provides cheat for Fedora 28, 29 and Rawhide, and for EPEL 7. To install cheat, use these commands:

sudo dnf copr enable tkorbar/cheat
sudo dnf install cheat

Setconf

Setconf is a simple program for making changes in configuration files, serving as an alternative to sed. All setconf does is find the given key in the specified file and change its value. Setconf provides only a few options to change its behavior, such as uncommenting the line that is being changed.
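The core idea, finding a `key=value` line and rewriting only the value, can be sketched in a few lines of Python. This is an illustration of the concept, not setconf's actual implementation:

```python
import re

def set_conf_value(text, key, value):
    """Replace the value of `key` in key=value style config text,
    leaving every other line unchanged (in the spirit of setconf)."""
    pattern = re.compile(r"^(\s*" + re.escape(key) + r"\s*=\s*).*$",
                         re.MULTILINE)
    # The lambda keeps the "key=" prefix and swaps in the new value.
    return pattern.sub(lambda m: m.group(1) + value, text)

config = "editor=nano\ntheme=dark\n"
print(set_conf_value(config, "editor", "vim"))
# editor=vim
# theme=dark
```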

Installation instructions

The repo currently provides setconf for Fedora 27, 28 and 29. To install setconf, use these commands:

sudo dnf copr enable jamacku/setconf
sudo dnf install setconf

Reddit Terminal Viewer

Reddit Terminal Viewer, or rtv, is an interface for browsing Reddit from the terminal. It provides the basic functionality of Reddit, so you can log in to your account, view subreddits, comment, upvote and discover new topics. However, rtv currently doesn’t support Reddit tags.

Installation instructions

The repo currently provides Reddit Terminal Viewer for Fedora 29 and Rawhide. To install Reddit Terminal Viewer, use these commands:

sudo dnf copr enable tc01/rtv
sudo dnf install rtv

Tuesday, 05 February

17:00

Yelp Dataset Challenge: Round 11 Winners [Yelp Engineering and Product Blog]

The eleventh round of the Yelp Dataset Challenge ran throughout the first half of 2018 and we received many impressive, original, and fascinating submissions. As usual, we were struck by the quality of the entries: keep up the good work, folks! Today, we are proud to announce the grand prize winner of the $5,000 award: “Generalized Latent Variable Recovery for Generative Adversarial Networks” by Nicholas Egan, Jeffrey Zhang, and Kevin Shen (from the Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science). The authors used a Deep Convolutional Generative Adversarial Network (DCGAN) to create photo-realistic pictures of food...

07:15

Saturday Morning Breakfast Cereal - Trolley [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
If you can spare five people from hearing another trolley problem joke by telling just one person a trolley problem joke, is it moral?


Today's News:

Monday, 04 February

08:29

Saturday Morning Breakfast Cereal - Times Have Changed [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
Really, how can people NOT be optimistic about the future?


Today's News:

01:00

Install Fedora remotely using Live media [Fedora Magazine]

Say a friend or relative wants to install Fedora, but there are some wrinkles that make them less confident about running the installer themselves. For instance, they might want to save existing content without swapping out the hard drive, which involves shrinking filesystems, not for the inexperienced. This article walks you through a process that allows you to help them install remotely.

Naturally, they need to trust you a lot for this procedure (and you them), since they are giving you total access to the machine. I’ll call them “the client.”

Step 1. They need to download the Live Media from https://getfedora.org and write it to a USB stick.  I used the Cinnamon Spin, but nothing in this article should be specific to a Desktop Environment.   You’ll need to talk them through all this if needed.  There are also instructions on getfedora.org.

Step 2. The client inserts the USB drive into the machine to be installed and boots from USB. The exact steps to enable USB boot are device specific, and beyond the scope of this article. You may want to make sure the client has access to their product documentation. Or you can ask them for the make and model number of their system, and look up the docs on the internet.

Step 3. Have them connect to the internet via local Wifi or Ethernet, and have them run Firefox to check that it is working. Send them to this very article, so they can copy and paste relevant commands as you direct them.

Step 4. Now have them start a terminal from the menu.

[liveuser@localhost-live ~]$ passwd
Changing password for user liveuser.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[liveuser@localhost-live ~]$ sudo systemctl start sshd
[liveuser@localhost-live ~]$ ifconfig

Sshd will not allow remote logins with an empty password, so this step assigns a password, which the client will need to share with you. I suggest a series of simple but random words.
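A quick way to generate such a series of words is sketched below. The word list here is a short stand-in for illustration; on a real system you would draw from a large dictionary, such as /usr/share/dict/words, for meaningful entropy:

```python
import secrets

# Illustrative word list; a real dictionary file gives far more entropy.
WORDS = ["anchor", "maple", "ribbon", "copper", "lantern", "prairie",
         "walnut", "harbor", "meadow", "quartz", "saddle", "timber"]

def word_passphrase(n_words=4):
    """Pick n_words at random with a CSPRNG and join them with spaces."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(word_passphrase())  # e.g. "copper meadow anchor timber"
```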

The Live media includes pidgin (or a similar chat client for other DEs). It would be helpful to have the client start pidgin and login to a trusted server. I suggest installing jabberd on a Fedora server with a public IP, and allowing open registration. I’ll skip the details for this article. With the client on pidgin with SSL on an XMPP server you trust/control, you can share the password more securely than over the phone.  (Installing OTR would be yet another step to talk them through.)

Now the order of business is to let you connect securely to the client machine. Have the client share the output of the ifconfig command with you. If they have a public IPv4 or IPv6 address, and you can connect to it, you can skip to step 6. You can also save steps if they are on a LAN that doesn’t block ethertype 0xfc00 and other Cjdns nodes are on the LAN, but that’s unlikely enough that we’ll skip the details.

Step 5. If you are here, your client is in “IP4 NAT jail”, and you need to help them escape by setting up a VPN. The simplest VPN to set up is Cjdns, but since you don’t want to talk the client through setting even that up, you’ll also need a trusted machine accessible via IPv4 on which you can give the client an unprivileged shell account for bootstrapping. Have the client log in to your server with an SSH remote tunnel:

[liveuser@localhost-live ~]$ ssh -R8022:localhost:22 username@shared.example.net
The authenticity of host 'shared.example.net' can't be established.
ECDSA key fingerprint is SHA256:kRfekGaa456ga34tddrgg8kZ3VmBbqlx6vZZwhcRpuc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'shared.example.net' (ECDSA) to the list of known hosts.
username@shared's password:
Last login: Wed Jan 23 18:15:38 2019 from 2001:db8:1234::1019
[theirlogin@shared ~]$

Now you can login to their machine and install Cjdns.  Login to shared.example.net and then into the client machine:

[yourlogin@shared ~]$ ssh -p8022 liveuser@localhost
liveuser@localhost's password:
Last login: Wed Jan 23 18:16:36 2019 from ::1
[liveuser@localhost-live ~]$

Install and configure Cjdns on the client, using these instructions if you are not already familiar, and also on your own workstation if you haven’t already. You can skip installing cjdns-tools and cjdns-selinux on the client since this is a temporary setup. But you’ll need the tools to help debug any glitches.

Run ifconfig tun0 and copy the client’s Cjdns VPN IP to your local /etc/hosts file with a suitable nickname.  I’ll use the nickname h.client for this article. 

[you@yourworkstation ~] $ sudo su -
# echo fc3f:26b0:49ec:7bc7:a757:b6eb:1eae:714f h.client >>/etc/hosts

Verify that you can login to liveuser@h.client from your workstation, and then you can logout of your tunnel login.

Step 6. Install x2goserver on the client.  Tigervnc would be lighter weight for a limited machine, but x2go easily connects to the liveuser desktop so they can see what you are doing for education and transparency.  Some spins include a built-in remote desktop feature as well, but I like x2go. 

Run x2goclient on your workstation, and create a new session:

  • Session name: h.client
  • Host: h.client
  • Login: liveuser
  • Session type: Connect to local desktop

 Now you can do your expert stuff while the client watches. For shrinking existing partitions, I recommend installing gparted and running it before the live install.

Step 7. When the Live Install is finished, the newly installed root filesystem should still be mounted as /mnt/sysimage. Double check, then copy the cjdns config to the new system and enable sshd. Incoming port 22 should be open by default.

[liveuser@localhost-live ~]$ sudo cp /etc/cjdroute.conf /mnt/sysimage/etc 
[liveuser@localhost-live ~]$ sudo systemctl --root=/mnt/sysimage enable sshd

You should also install cjdns (or whatever VPN you used instead) on the new system so that the client doesn’t need to do the SSH rigamarole again after rebooting.

[liveuser@localhost-live ~]$ sudo dnf install cjdns --installroot=/mnt/sysimage 
[liveuser@localhost-live ~]$ sudo systemctl --root=/mnt/sysimage enable cjdns

Step 8. You should now be ready to reboot! If something goes wrong, your client can boot from the Live Media and do the SSH routine from step 5 again so you can diagnose what went wrong.


Photo by Steve Johnson on Unsplash.

Sunday, 03 February

08:06

Saturday Morning Breakfast Cereal - Homework [Saturday Morning Breakfast Cereal]



Click here to go see the bonus panel!

Hovertext:
This is one of those comics where I'm sure someone somewhere must've beat me to the idea.


Today's News:

Saturday, 02 February

Friday, 01 February

09:51

Executing a Sunset [Code as Craft]

We all know how exciting it is to build new products: the thrill of a pile of new ideas waiting to be tested, new customers to reach, knotty problems to solve, and dreams of upward-sloping graphs. But what happens when a product is no longer aligned with the trajectory of the company? Often, the product, code, and infrastructure become a lower priority while the team moves on to the next exciting new venture. In 2018, Etsy sunset three customer-facing products: Etsy Wholesale, Etsy Studio, and Etsy Manufacturing.

In this blog post, we will explore how we sunset these products at Etsy. This process involves a host of stakeholders including marketing, product, customer support, finance and many other teams, but the focus of this blog post is on engineering and the actual execution of the sunset.

Process: Pre-code deletion

Use Feature Flags and Turn off Traffic

Once the communication had been done through emails, in-product announcements, and posts in the user forums, we started focusing on the execution. Prior to the day of each sunset, we used our feature flagging infrastructure to build a switch to disable access to the interface for Wholesale and Manufacturing. Feature flags are an integral part of the continuous deployment process at Etsy. Feature flags reinforce the benefits of small changes and continuous delivery.

Because a feature flag controlled access to these products, on the day of the sunset all we had to do was deploy a one-line configuration change and the product was shut off.
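The mechanics are roughly as follows: a flag guards every entry point, so the sunset deploy only flips one value. This is a generic sketch of the pattern, not Etsy's actual feature-flag API:

```python
# Hypothetical feature-flag gate; a real system reads flags from a
# deployed configuration file rather than an in-process dict.
FLAGS = {"wholesale_enabled": True}

def handle_wholesale_request(path):
    """Serve the page only while the product's flag is on."""
    if not FLAGS["wholesale_enabled"]:
        # 410 Gone: the product has been intentionally retired.
        return (410, "This product has been discontinued.")
    return (200, f"wholesale page: {path}")

# The "one line configuration change" on sunset day:
FLAGS["wholesale_enabled"] = False
print(handle_wholesale_request("/orders"))  # (410, ...)
```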

A softer transition is often preferable to a hard turn off. For example, we disabled the ability for buyers to create new orders one month before shutting Etsy Wholesale off. That gave sellers a chance to service the orders that remained on-platform, avoiding a mad-dash at the end.

Export Data for Users

Once the Etsy Wholesale platform was turned off, we created data export files for each seller and buyer with information about every order they received or placed during the five years that the platform was active. Generating and storing these files in one shot allowed us to clean up the wholesale codebase without fear that parts of it would be needed later for exporting data.

Set Up Redirects

We highly recommend redirects through feature flags,  but a hard DNS redirect might be required in some circumstances. The sunset of Etsy Studio was complicated by the fact that in the middle of this project, etsy.com was being migrated from on-premise hosting to the cloud. To reduce complexity and risk for the massive cloud migration project, Etsy Studio had to be shut off before the migration began. On the day before the cloud migration, a DNS redirect was made to forward any request on etsystudio.com to a special page on etsy.com that explained that Etsy Studio was being shut down. Once the DNS change went live, it effectively shut off Etsy Studio completely.

Code Deletion Methodology

Once we confirmed that all three products were no longer receiving traffic, we kicked off the toughest part of the engineering process: deleting all the code. We phased the work in two parts: tightly integrated and loosely integrated products. Integrations in the most sensitive/dangerous spots were prioritized, and safer deletions were done later, as we were heading into the holiday season (our busiest time of the year).

For Etsy Wholesale and Etsy Manufacturing, we had to remove the code piece-by-piece because it was tightly integrated with other features on the site. For Etsy Studio, we thought we would be able to delete the code in one massive commit. One benefit of our continuous integration system is that we can try things out, fail, and revert without negatively affecting our users. This proved valuable: when we tried deleting the code in one massive commit, some unit tests for the Etsy codebase started failing. We realized that small dependencies between the code had formed over time. So we decided to delete the code in smaller, easier-to-test chunks.
 

A small example of dependencies creeping in where you least expect them.

Challenges: Planning (or lack of it) for slowdowns

Interdependencies

During the process of sunsetting, we didn’t consider how busy other teams would be heading into the holiday season. This slowed down the process of getting code reviews approved. This became especially crucial for us since we were removing and modifying big chunks of code maintained by other teams.

There were also several other big projects in flight while we were trying to delete code across our codebase, and that slowed us down. One example I already mentioned was the cloud migration: we couldn’t shut off Etsy Studio using a config flag and had to work around it.

Commit Size and Deploys

To reduce risk, our intention was to keep our commits small, but when trying to delete so much code at once, it’s hard to keep all your commits small. Testing and deploying took at least 50% of our team’s time. Our team made about 413 commits over five months, deleting 275,000 lines of code. That averages out to about 630 lines of code deleted per commit, frequently deployed one at a time.

Compliance

We actively think of compliance when building new things, but it is also important to keep in mind compliance requirements when you delete code. Etsy’s SOX compliance system requires that certain files in our codebase are subject to extra controls. When we deploy changes to such files, we need additional reviews and signoffs. We had to do 44 SOX reviews since we did multiple small commits. Each review requires approvals by multiple people and this added on average a few hours to each bit of deletion we did.  Similarly, we considered user privacy and data protection in how to make retention decisions about sunsetted products, how to make data available for export, and how it impacts our terms and policies.

Deleting so much code can be a difficult process. We had to revert changes from production at least five times, which, for the most part was simple. One of these five reverts was complicated by a data corruption issue affecting a small population of sellers, which required several days of work to write, test, and run a script to fix the problem.

The Outcome

We measured success using the following metrics:

  • Code deletion: 275,000 lines of code removed
  • Test coverage: a slight drop in coverage metrics, because the Etsy Studio project was well above average while Etsy Wholesale and Etsy Manufacturing were slightly below average
  • System complexity: reduced across 8+ Etsy systems, including listing management, the listing page, payments, search indexes, authentication, member conversations, analytics, member services, and our global header user interface
  • Operational hours: saved 152 member support hours a month and about 320 engineering hours a month

Error logs for wholesale dropped from thousands a day to fewer than 100 (eventually we got this to zero).

The roots these three products had in our systems demonstrated the challenges of building and maintaining a standalone product alongside our core marketplace. The many branching pieces of logic that snuck in made it difficult to reuse much of the existing code. By deleting 275,000 lines of code, we were able to reduce tech debt and remove roadblocks for other engineers.

01:00

Fedora Classrooms: Silverblue and Badge Design [Fedora Magazine]

Fedora Classroom sessions continue with two introductory sessions: using Fedora Silverblue (February 7), and creating Fedora Badges designs (February 10). The general schedule for sessions is available on the wiki, along with resources and recordings from previous sessions. Details on both upcoming sessions follow.

Topic: Fedora Silverblue

Fedora Silverblue is a variant of Fedora Workstation that is composed and delivered using ostree technology. It uses some of the same RPMs found in Fedora Workstation but delivers them in a way that produces an “immutable host” for the end user.  This provides atomic upgrades for end users and allows users to move to a fully containerized environment using traditional containers and flatpaks.

This session is aimed at users who want to learn more about Fedora Silverblue,
ostree, rpm-ostree, containers, and Flatpaks.  It is expected that attendees have some basic Linux knowledge.

The following topics will be covered:

  • What’s an immutable host?
  • How is Fedora Silverblue different from Fedora Workstation?
  • What is ostree and rpm-ostree?
  • Upgrading, rollbacks, and rebasing your host.
  • Package layering with rpm-ostree.
  • Using containers and container tools (podman, buildah).
  • Using Flatpaks for GUI applications

When and where

Instructor

Micah Abbott is a Principal Quality Engineer working for Red Hat. He remembers his first introduction to Linux was during university when someone showed him Red Hat Linux running on a DEC Alpha Workstation.  He’s dabbled with  various distributions in the following years, but has always had a soft spot for  Fedora. Micah has recently been contributing towards the development  of  Fedora/Red Hat CoreOS and before that Project Atomic.  He enjoys engaging with the community to help solve problems that users are facing and has most recently been spending a lot of time involved with the Fedora Silverblue community.

Topic: Creating Fedora Badges Designs

Fedora Badges is a gamification system created around the hard work of the Fedora community on the various aspects of the Fedora Project. The Badges project helps to drive and motivate Fedora contributors to participate in all different parts of Fedora development, quality, content, events, and stay active in community initiatives. This classroom will explain the process of creating a design for a Fedora Badge.

Here is the agenda for the classroom session:

  • What makes a Fedora Badge?
  • Overview of resources, website, and tickets.
  • Step by step tutorial to design a badge.

Resources needed:

On Fedora, inkscape and comfortaa can be installed using dnf:

sudo dnf install inkscape aajohan-comfortaa-fonts

When and where

Instructor

Marie Nordin is a graphic designer and fine artist, with a day job as an Assistant Purchasing Manager in Rochester, NY. Marie began working on the Fedora Badges project and the Fedora Design Team in 2013 through an internship with the Outreachy program. She has maintained the design side of the Fedora Badges project for four years, as well as running workshops and teaching others how to contribute designs to Badges.

Thursday, 31 January

01:22

Sailfish X Beta now available for Sony Xperia XA2 device range [Jolla Blog]

Today we started offering the Sailfish X Beta software package including the much awaited Android application support. You can get it now for your Sony Xperia XA2, Xperia XA2 Plus, and Xperia XA2 Ultra devices from the Jolla Shop!

The awaited Android application support includes major architectural changes, and upgrades the support from Android 4.4.4 to Android 8.1, significantly improving the Android app compatibility. This is a major upgrade in the Android Runtime in Sailfish OS as it will open up a wide range of additional apps to enjoy in your beloved Sailfish device and, even though still in beta, we’re confident that you’ll enjoy using it to take full advantage of the latest apps – if you prefer to use Android apps on your device of course.

As already stated in our earlier blog post, the now-released version of the Android app support is a public beta. At this stage it still has many known issues, but we wanted to make it available anyway, and thus we made the call to offer the Android app support part of the package free of charge for the time being, until the beta label is removed.

The Sailfish X beta will cost 29,90€ for the time being, and comes with predictive text input, MS Exchange Support, software updates and support, and a free beta option of the updated Android application support. If you purchase Sailfish X Beta for your XA2 range device now, you will get the future updates, removing the beta tag, without any additional costs.

How is Sailfish OS Android 8.1 Apps Support better than the earlier 4.4.4?

Along with this upgrade come many improvements; here are a few of them:

  • Newer APIs allowing many more apps to work
  • Many mobile banking apps and apps that require non-rooted devices are compatible
  • More secure with latest security fixes
  • Improved memory handling
  • Improved performance
  • Better notification integration

Why do we call it Sailfish X Beta?

We’ve seen that many of our community members and Sailfish fans have been eagerly waiting for this update for some time now, and thus we wanted to release it even though it still has some known issues. Here is the list of the main common issues you will face with Sailfish X Beta on your Xperia XA2 range device. More details about these can be found in the Release notes.

1. Mobile data is not detected properly by some Android apps. These issues occur e.g. with Aptoide, Spotify, Messenger, and Twitter. We recommend using WLAN when setting up Android apps.
2. Recently created files (e.g. photos) are not always detected by Android apps (device restart may be needed)
3. Audio and multimedia are not fully functional in all apps
4. Battery consumption is high when connected to WLAN networks

In addition, please remember that Sailfish OS Android Runtime does not support Google Play services; therefore certain apps requiring those services may not function properly.

Further info on Sailfish X can be found at Jolla.com. To put it simply, here is what is currently available:

[Table: Sailfish X offering as of 31 January 2019]

The non-beta Sailfish X software package is currently available for Sony Xperia X devices only (with the older Android Runtime).

We hope you will enjoy Sailfish X Beta on your preferred Xperia XA2 device! As there are issues with the release, please provide feedback to us at together.jolla.com: report issues, ask questions, help others by providing answers, and comment or vote on the issues. This helps us to prioritize our work to fix most important issues first.

Keep on sailing!
Vesku

The post Sailfish X Beta now available for Sony Xperia XA2 device range appeared first on Jolla Blog.

Wednesday, 30 January

01:00

5 quick tips for Fedora Workstation users [Fedora Magazine]

Whether you are a new or long time Fedora Workstation user, you might be looking for some quick tips to customize, tweak or enhance your desktop experience. In this article, we’ll round up five tips to help you get more out of your Fedora Workstation.

Enhancing photos with GNOME Photos

GNOME Photos is an application for sorting and organizing your photo library. Additionally, it features basic image editing tools for quick edits. This article walks you through the basics of editing images with GNOME Photos.

Try Visual Studio Code

Visual Studio Code is an open source text editor that includes debugging features, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring tools. This article walks you through installing Visual Studio Code on Fedora, and also covers basic usage tips.

Dash to Dock Extension

Dash to Dock takes the dock that is visible in the GNOME Shell Overview, and places it on the main desktop. This provides a view of open applications at a glance, and provides a quick way to switch windows using the mouse.

This article covers how to install the extension, as well as covering the basic features and settings.

Using Nautilus Scripts

Scripts in Nautilus are not a new feature, but they are still super useful for automating quick tasks in the file browser.

Installing more Wallpapers

The Fedora repositories contain a treasure trove of wallpapers created for Fedora releases. This article shows you the wallpapers available from previous releases — going back to Fedora 8 — and what packages to install to get them on your current Fedora install.

Tuesday, 29 January

Monday, 28 January

05:06

3 simple and useful GNOME Shell extensions [Fedora Magazine]

The default desktop of Fedora Workstation — GNOME Shell — is known and loved by many users for its minimal, clutter-free user interface. It is also known for the ability to extend the stock interface using extensions. In this article, we cover three simple and useful extensions for GNOME Shell. Each one adds a simple extra behaviour to your desktop, helping with tasks you might do every day.

Installing Extensions

The quickest and easiest way to install GNOME Shell extensions is with the Software Application. Check out the previous post here on the Magazine for more details:

Removable Drive Menu

Removable Drive Menu extension on Fedora 29

First up is the Removable Drive Menu extension. It is a simple tool that adds a small widget in the system tray if you have a removable drive inserted into your computer. This allows you easy access to open Files for your removable drive, or quickly and easily eject the drive for safe removal of the device.

Removable Drive Menu in the Software application

Extensions Extension

The Extensions extension is super useful if you are always installing and trying out new extensions. It provides a list of all the installed extensions, allowing you to enable or disable them. Additionally, if an extension has settings, it allows quick access to the settings dialog for each one.

the Extensions extension in the Software application

Frippery Move Clock

Finally, there is the simplest extension in the list. Frippery Move Clock simply moves the clock from the center of the top bar to the right, next to the status area.

Sunday, 27 January

Friday, 25 January

02:00

Using Antora for your open source documentation [Fedora Magazine]

Are you looking for an easy way to write and publish technical documentation? Let me introduce Antora — an open source documentation site generator. Simple enough for a tiny project, but also complex enough to cover large documentation sites such as Fedora Docs.

With sources stored in git, written in the simple yet powerful AsciiDoc markup language, and static HTML as the output, Antora makes writing, collaborating on, and publishing your documentation a no-brainer.

The basic concepts

Before we build a simple site, let’s have a look at some of the core concepts Antora uses to make the world a happier place. Or, at least, to build a documentation website.

Organizing the content

All sources that are used to build your documentation site are stored in a git repository. Or multiple ones — potentially owned by different people. For example, at the time of writing, the Fedora Docs had its sources stored in 24 different repositories owned by different groups having their own rules around contributions.

The content in Antora is organized into components, usually representing different areas of your project, or, well, different components of the software you’re documenting — such as the backend, the UI, etc. Components can be independently versioned, and each component gets a separate space on the docs site with its own menu.

Components can be optionally broken down into so-called modules. Modules are mostly invisible on the site, but they allow you to organize your sources into logical groups, and even store each in different git repository if that’s something you need to do. We use this in Fedora Docs to separate the Release Notes, the Installation Guide, and the System Administrator Guide into three different source repositories with their own rules, while preserving a single view in the UI.

What’s great about this approach is that, to some extent, the way your sources are physically structured is not reflected on the site.

Virtual catalog

When assembling the site, Antora builds a virtual catalog of all pages, assigning a unique ID to each one based on its name and the component, the version, and module it belongs to. The page ID is then used to generate URLs for each page, and for internal links as well. So, to some extent, the source repository structure doesn’t really matter as far as the site is concerned.
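To make the idea concrete, here is an illustrative sketch in plain Python (not Antora's actual code, and the real URL scheme is configurable) of how a page's coordinates can be combined into a unique ID and a stable URL:

```python
# Illustrative sketch only -- not Antora's implementation. It shows how a
# page's logical coordinates (component, version, module, file) can form a
# unique ID and a URL that is independent of the source repository layout.

def page_id(component: str, version: str, module: str, page: str) -> str:
    """Build a unique page ID from the page's logical coordinates."""
    return f"{version}@{component}:{module}:{page}"

def page_url(component: str, version: str, module: str, page: str) -> str:
    """Derive a site URL from the same coordinates."""
    html = page.rsplit(".", 1)[0] + ".html"
    # Assume the default "ROOT" module is omitted from URLs.
    parts = [component, version] + ([] if module == "ROOT" else [module]) + [html]
    return "/" + "/".join(parts)

print(page_id("component-b", "2.0", "ROOT", "index.adoc"))
# -> 2.0@component-b:ROOT:index.adoc
print(page_url("component-b", "2.0", "ROOT", "index.adoc"))
# -> /component-b/2.0/index.html
```

Because links are resolved through IDs like these rather than file paths, moving a source file between repositories doesn't break the site's URLs.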

As an example, if we for some reason decided to merge all 24 repositories of Fedora Docs into one, nothing on the site would change. Well, except that the “Edit this page” link on every page would suddenly point to this one repository.

Independent UI

We’ve covered the content, but how is it going to look?

Documentation sites generated with Antora use a so-called UI bundle that defines the look and feel of your site. The UI bundle holds all graphical assets such as CSS, images, etc. to make your site look beautiful.

It is expected that the UI will be developed independently of the documentation content, and that’s exactly what Antora supports.

Putting it all together

Having sources distributed in multiple repositories might raise a question: How do you build the site? The answer is: Antora Playbook.

The Antora Playbook is a file that points to all the source repositories and the UI bundle. It also defines additional metadata, such as the name of your site.

The Playbook is the only file you need to have locally available in order to build the site. Everything else gets fetched automatically as a part of the build process.

Building a site with Antora

Demo time! To build a minimal site, you need three things:

  1. At least one component holding your AsciiDoc sources.
  2. An Antora Playbook.
  3. A UI bundle.

The good news is that the nice people behind Antora provide example Antora sources we can try right away.

The Playbook

Let’s first have a look at the Playbook:

site:
  title: Antora Demo Site
  # the 404 page and sitemap files only get generated when the url property is set
  url: https://example.org/docs
  start_page: component-b::index.adoc
content:
  sources:
  - url: https://gitlab.com/antora/demo/demo-component-a.git
    branches: master
  - url: https://gitlab.com/antora/demo/demo-component-b.git
    branches: [v2.0, v1.0]
    start_path: docs
ui:
  bundle:
    url: https://gitlab.com/antora/antora-ui-default/-/jobs/artifacts/master/raw/build/ui-bundle.zip?job=bundle-stable
    snapshot: true

As we can see, the Playbook defines some information about the site, lists the content repositories, and points to the UI bundle.

There are two repositories: demo-component-a with a single branch, and demo-component-b with two branches, each representing a different version.

Components

The minimal source repository structure is nicely demonstrated in the demo-component-a repository:

antora.yml              <- component metadata
modules/
  ROOT/                 <- the default module
    nav.adoc            <- menu definition
    pages/              <- a directory with all the .adoc sources
      source1.adoc
      source2.adoc
      ...

The antora.yml file contains metadata for this component, such as the name and the version of the component and the starting page, and it also points to a menu definition file.

name: component-a
title: Component A
version: 1.5.6
start_page: ROOT:inline-text-formatting.adoc
nav:
- modules/ROOT/nav.adoc

The menu definition file is a simple list that defines the structure of the menu and the content. It uses the page ID to identify each page.

* xref:inline-text-formatting.adoc[Basic Inline Text Formatting]
* xref:special-characters.adoc[Special Characters & Symbols]
* xref:admonition.adoc[Admonition]
* xref:sidebar.adoc[Sidebar]
* xref:ui-macros.adoc[UI Macros]
* Lists
** xref:lists/ordered-list.adoc[Ordered List]
** xref:lists/unordered-list.adoc[Unordered List]

And finally, there’s the actual content under modules/ROOT/pages/ — you can see the repository for examples, or the AsciiDoc syntax reference.

The UI bundle

For the UI, we’ll be using the example UI provided by the project.

Going into the details of Antora UI would be beyond the scope of this article, but if you’re interested, please see the Antora UI documentation for more info.

Building the site

Note: We’ll be using Podman to run Antora in a container. You can learn about Podman on the Fedora Magazine.

To build the site, we only need to call Antora on the Playbook file.

The easiest way to get Antora at the moment is to use the container image provided by the project. You can get it by running:

$ podman pull antora/antora

Let’s get the playbook repository:

$ git clone https://gitlab.com/antora/demo/demo-site.git
$ cd demo-site

And run Antora using the following command:

$ podman run --rm -it -v $(pwd):/antora:z antora/antora site.yml

The site will be available in the public directory. You can either open it in your web browser directly, or start a local web server using:

$ cd public
$ python3 -m http.server 8080

Your site will be available at http://localhost:8080.

Thursday, 24 January

Wednesday, 23 January

01:00

Mind map yourself using FreeMind and Fedora [Fedora Magazine]

A mind map of yourself sounds a little far-fetched at first. Is this process about neural pathways? Or telepathic communication? Not at all. Instead, a mind map of yourself is a way to describe yourself to others visually. It also shows connections among the characteristics you use to describe yourself. It’s a useful way to share information with others in a clever but also controllable way. You can use any mind map application for this purpose. This article shows you how to get started using FreeMind, available in Fedora.

Get the application

The FreeMind application has been around a while. While the UI is a bit dated and could use a refresh, it’s a powerful app that offers many options for building mind maps. And of course it’s 100% open source. There are other mind mapping apps available for Fedora and Linux users, as well. Check out this previous article that covers several mind map options.

Install FreeMind from the Fedora repositories using the Software app if you’re running Fedora Workstation. Or use this sudo command in a terminal:

$ sudo dnf install freemind

You can launch the app from the GNOME Shell Overview in Fedora Workstation. Or use the application start service your desktop environment provides. FreeMind shows you a new, blank map by default:

FreeMind initial (blank) mind map

A map consists of linked items or descriptions — nodes. When you think of something related to a node you want to capture, simply create a new node connected to it.

Mapping yourself

Click in the initial node. Replace it with your name by editing the text and hitting Enter. You’ve just started your mind map.

What would you think of if you had to fully describe yourself to someone? There are probably many things to cover. How do you spend your time? What do you enjoy? What do you dislike? What do you value? Do you have a family? All of this can be captured in nodes.

To add a node connection, select the existing node, and hit Insert, or use the “light bulb” icon for a new child node. To add another node at the same level as the new child, use Enter.

Don’t worry if you make a mistake. You can use the Delete key to remove an unwanted node. There are no rules about content. Short nodes are best, though. They allow your mind to move quickly when creating the map. Concise nodes also let viewers scan and understand the map easily later.

This example uses nodes to explore each of these major categories:

Personal mind map, first level

You could do another round of iteration for each of these areas. Let your mind freely connect ideas to generate the map. Don’t worry about “getting it right.” It’s better to get everything out of your head and onto the display. Here’s what a next-level map might look like.

Personal mind map, second level

You could expand on any of these nodes in the same way. Notice how much information you can quickly understand about John Q. Public in the example.

How to use your personal mind map

This is a great way to have team or project members introduce themselves to each other. You can apply all sorts of formatting and color to the map to give it personality. These are fun to do on paper, of course. But having one on your Fedora system means you can always fix mistakes, or even make changes as you change.

Have fun exploring your personal mind map!


Photo by Daniel Hjalmarsson on Unsplash.

Tuesday, 22 January

Monday, 21 January

01:00

Build a Django RESTful API on Fedora. [Fedora Magazine]

With the rise of Kubernetes and microservices architecture, being able to quickly write and deploy a RESTful API service is a good skill to have. In this first part of a series of articles, you’ll learn how to use Fedora to build a RESTful application and deploy it on OpenShift. Together, we’re going to build the back-end for a “To Do” application.

The APIs allow you to Create, Read, Update, and Delete (CRUD) a task. The tasks are stored in a database and we’re using the Django ORM (Object Relational Mapping) to deal with the database management.

Django App and Rest Framework setup

In a new directory, create a Python 3 virtual environment so that you can install dependencies.

$ mkdir todoapp && cd todoapp
$ python3 -m venv .venv
$ source .venv/bin/activate

After activating the virtual environment, install the dependencies.

(.venv)$ pip install djangorestframework django

Django REST Framework, or DRF, is a framework that makes it easy to create RESTful CRUD APIs. By default it gives access to useful features like browseable APIs, authentication management, serialization of data, and more.

Create the Django project and application

Create the Django project using the django-admin CLI tool provided.

(.venv) $ django-admin startproject todo_app . # Note the trailing '.'
(.venv) $ tree .
.
├── manage.py
└── todo_app
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
1 directory, 5 files

Next, create the application inside the project.

(.venv) $ cd todo_app
(.venv) $ django-admin startapp todo
(.venv) $ cd ..
(.venv) $ tree .
.
├── manage.py
└── todo_app
    ├── __init__.py
    ├── settings.py
    ├── todo
    │   ├── admin.py
    │   ├── apps.py
    │   ├── __init__.py
    │   ├── migrations
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── tests.py
    │   └── views.py
    ├── urls.py
    └── wsgi.py

Now that the basic structure of the project is in place, you can enable the REST framework and the todo application. Let’s add rest_framework and todo to the list of INSTALLED_APPS in the project’s settings.py.

todoapp/todo_app/settings.py
# Application definition

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'todo_app.todo',
]

Application Model and Database

The next step of building our application is to set up the database. By default, Django uses the SQLite database management system. Since SQLite works well and is easy to use during development, let’s keep this default setting. The second part of this series will look at how to replace SQLite with PostgreSQL to run the application in production.

The Task Model

By adding the following code to todo_app/todo/models.py, you define which properties a task has. The application defines a task with a title, a description, and a status. The status of a task can only be one of three states: Backlog, Work in Progress, and Done.

from django.db import models

class Task(models.Model):
    STATES = (("todo", "Backlog"), ("wip", "Work in Progress"), ("done", "Done"))
    title = models.CharField(max_length=255, blank=False, unique=True)
    description = models.TextField()
    status = models.CharField(max_length=4, choices=STATES, default="todo")

Now create the database migration script that Django uses to update the database with changes.

(.venv) $ PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin makemigrations

Then you can apply the migration to the database.

(.venv) $ PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin migrate

This step creates a file named db.sqlite3 in the root directory of the application. This is where SQLite stores the data.

Access to the data

Creating a View

Now that you can represent and store a task in the database, you need a way to access the data.  This is where we start making use of Django REST Framework by using the ModelViewSet. The ModelViewSet provides the following actions on a data model: list, retrieve, create, update, partial update, and destroy.

Let’s add our view to todo_app/todo/views.py:

from rest_framework import viewsets

from todo_app.todo.models import Task
from todo_app.todo.serializers import TaskSerializer


class TaskViewSet(viewsets.ModelViewSet):
    queryset = Task.objects.all()
    serializer_class = TaskSerializer

Creating a Serializer

As you can see, the TaskViewSet is using a Serializer. In DRF, serializers convert the data modeled in the application models to a native Python datatype. This datatype can be later easily rendered into JSON or XML, for example. Serializers are also used to deserialize JSON or other content types into the data structure defined in the model.
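As a plain-Python illustration of that round trip (using the standard-library json module, not DRF's serializer machinery, and a hypothetical task dict):

```python
import json

# Illustrative only: a hand-rolled round trip between a native Python dict
# and JSON. DRF's serializers automate this (plus validation) for models.

task = {"title": "Write the docs", "description": "Part one", "status": "todo"}

payload = json.dumps(task)      # serialize: native dict -> JSON string
restored = json.loads(payload)  # deserialize: JSON string -> native dict

assert restored == task
print(payload)
```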

Let’s add our TaskSerializer object by creating a new file in the project todo_app/todo/serializers.py:

from rest_framework.serializers import ModelSerializer
from todo_app.todo.models import Task


class TaskSerializer(ModelSerializer):
    class Meta:
        model = Task
        fields = "__all__"

We’re using the generic ModelSerializer from DRF to automatically create a serializer with the fields that correspond to our Task model.

Now that we have a data model, a view, and a way to serialize/deserialize data, we need to map our view actions to URLs. That way we can use HTTP methods to manipulate our data.

Creating a Router

Here again we’re using the power of the Django REST Framework with the DefaultRouter. The DRF DefaultRouter takes care of mapping actions to HTTP methods and URLs.

Before we see a better example of what the DefaultRouter does for us, let’s add a new URL to access the view we have created earlier. Add the following to todo_app/urls.py:

from django.contrib import admin
from django.conf.urls import url, include

from rest_framework.routers import DefaultRouter

from todo_app.todo.views import TaskViewSet

router = DefaultRouter()
router.register(r"todo", TaskViewSet)

urlpatterns = [
    url(r"admin/", admin.site.urls),
    url(r"^api/", include((router.urls, "todo"))),
]

As you can see, we’re registering our TaskViewSet to the DefaultRouter. Then later, we’re mapping all the router URLs to the /api endpoint. This way, DRF takes care of mapping the URLs and HTTP method to our view actions (list, retrieve, create, update, destroy).

For example, accessing the api/todo endpoint with a GET HTTP request calls the list action of our view. Doing the same but using a POST HTTP request calls the create action.
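The conventions described above can be summarized in a small sketch (illustrative Python, not DRF's actual router code):

```python
# A sketch of how DRF's DefaultRouter conventionally maps HTTP methods on
# the collection (/api/todo) and detail (/api/todo/<pk>) endpoints to
# ModelViewSet actions. Illustrative only; the real router emits Django
# URL patterns instead of a lookup table.

ROUTES = {
    ("GET", "collection"): "list",
    ("POST", "collection"): "create",
    ("GET", "detail"): "retrieve",
    ("PUT", "detail"): "update",
    ("PATCH", "detail"): "partial_update",
    ("DELETE", "detail"): "destroy",
}

def action_for(method: str, endpoint: str) -> str:
    return ROUTES[(method, endpoint)]

print(action_for("GET", "collection"))   # -> list
print(action_for("POST", "collection"))  # -> create
```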

To get a better grasp of this, let’s run the application and start using our API.

Running the application

We can run the application using the development server provided by Django. This server should only be used during development. We’ll see in the second part of this tutorial how to use a web server better suited for production.

(.venv)$ PYTHONPATH=. DJANGO_SETTINGS_MODULE=todo_app.settings django-admin runserver
Django version 2.1.5, using settings 'todo_app.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Now we can access the application at the following URL: http://127.0.0.1:8000/api/

DRF provides an interface to the view actions, for example listing or creating tasks, using the following URL: http://127.0.0.1:8000/api/todo

Or updating/deleting an existing task with this URL: http://127.0.0.1:8000/api/todo/1

Conclusion

In this article you’ve learned how to create a basic RESTful API using the Django REST Framework. In the second part of this series, we’ll update this application to use the PostgreSQL database management system, and deploy it in OpenShift.

The source code of the application is available on GitHub.


Sunday, 20 January

Saturday, 19 January

Friday, 18 January

01:00

How Do You Fedora: Journey into 2019 [Fedora Magazine]

Fedora had an amazing 2018. The distribution saw many improvements with the introduction of Fedora 28 and Fedora 29. Fedora 28 included third party repositories, making it easy to get software like the Steam client, Google Chrome and Nvidia’s proprietary drivers. Fedora 29 brought support for automatic updates for Flatpak.

One of the four foundations of Fedora is Friends. Here at the Magazine we’re looking back at 2018, and ahead to 2019, from the perspective of several members of the Fedora community. This article focuses on what each of them did last year, and what they’re looking forward to this year.

Fedora in 2018

Radka Janekova attended five events in 2018. She went to FOSDEM as a Fedora Ambassador, gave two presentations at devconf.cz, and gave three presentations on dotnet in Fedora. Janekova started using DaVinci Resolve in 2018: “DaVinci Resolve which is very Linux friendly video editor.” She did note one drawback, saying, “It may not be entirely open source though!”

Julita Inca traveled to many places in the world in 2018. “I took part of the Fedora 29 Release Party in Poland where I shared my experiences of being an Ambassador of Fedora these years in Peru.” She is currently at the University of Edinburgh. “I am focusing in getting a Master in High Performance Computing in the University of Edinburgh using ARCHER that has CentOS as Operating System.” As part of her master’s degree she is using a lot of new software. “I am learning new software for parallel programming I learned openMP and MPI.” To profile code in C and Fortran she is using Intel’s VTune.

Jose Bonilla went to a DevOps event hosted by a company called Rancher. Rancher is an open source company that provides a container orchestration framework which can be hosted in a variety of ways, including in the cloud or self-hosted. “I went to this event because I wished to gain more insight into how I can use Fedora containerization in my organization and to teach students how to manage applications and services.” This event showed that the power of open source lies less in competition and more in completion. “There were several open source projects at this event working completely in tandem without ever having this as a goal. The companies at this event were Google, Rancher, Gitlab and Aqua.” Jose used a variety of open source applications in 2018. “I used Cockpit, Portainer and Rancher OS. Portainer and Rancher are both services that manage dockers containers. Which only proves the utility of containers. I believe this to be the future of compute environments.” He is also working on tools for data analytics. “I am improving on my knowledge of Elasticsearch and the Elastic Stack — Kibana, which is an extraordinarily powerful open source set of tools for data analytics.”

Carlos Enrique Castro León has not been to a Fedora event in Peru, but listens to Red Hat Command Line Hero. “I really like to listen to him since I can meet people related to free code.” Last year he started using Kdenlive and Inkscape. “I like them because there is a large community in Spanish that can help me.”

Akinsola Akinwale started using VSCode, Calligra and Qt5 Designer in 2018. He uses VSCode for Python development. For editing documents and spreadsheets he uses Calligra. “I love Vscode for its embedded VIM , terminal & easy of use.” He started using Calligra just for a change of pace. He likes the flexibility of Qt5 Designer for creating graphical user interfaces instead of coding it all in VSCode.

Kevin Fenzi went to several Fedora events in 2018. He enjoyed all of them, but liked Flock in Dresden the best of them all. “At Flock in Dresden I got a chance to talk face to face with many other Fedora contributors that I only talk to via IRC or email the rest of the time. The organizers did an awesome job, the venue was great and it was all around just a great time. There were some talks that made me think, and others that made me excited to see what would happen with them in the coming year. Also, the chance to have high bandwith talks really helped move some ideas along to reality.” There were two applications Kevin started using in 2018. “First, after many years of use, I realized it was time to move on from using rdiff-backups for my backups. It’s a great tool, but it’s in python2 and very inactive upstream. After looking around I settled on borg backup and have been happily using that since. It has a few rough edges (it needs lots of cache files to do really fast backups, etc) but it has a very active community and seems to work pretty nicely.” The other application that Kevin started using is OpenShift. “Secondly, 2018 was the year I really dug into OpenShift. I understand now much more about how it works and how things are connected and how to manage and upgrade it. In 2019 we hope to move a bunch of things over to our OpenShift cluster. The OpenShift team is really doing a great job of making something that deploys and upgrades easily and are adding great features all the time (most recently the admin console, which is great to watch what your cluster is doing!).”

Fedora in 2019

Radka plans to do similar presentations in 2019. “At FOSDEM this time I’ll be presenting a story of an open source project eating servers with C#.” Janekova targets pre-university students in an effort to encourage young women to get involved in technology. “I really want to help dotnet and C# grow in the open source world, and I also want to educate the next generation a little bit better in terms of what women can or can not do.”

Julita plans on holding two events in 2019. “I can promote the use of Fedora and GNOME in Edinburgh University.” When she returns to Peru she plans on holding a conference on writing parallel code on Fedora and Gnome.

Jose plans on continuing to push open source initiatives such as cloud and container infrastructures. He will also continue teaching advanced Unix systems administration. “I am now helping a new generation of Red Hat Certified Professionals seek their place in the world of open source. It is indeed a joy when a student mentions they have obtained their certification because of what they were exposed to in my class.” He also plans on spending some more time with his art again.

Carlos would like to write for Fedora Magazine and help bring the magazine to the Latin American community. “I would like to contribute to Fedora Magazine. If possible I would like to help with the magazine in Spanish.”

Akinsola wants to hold a Fedora release party in 2019. “I want make many people aware of Fedora, make them aware they can be part of the release and it is easy to do.” He would also like to ensure that new Fedora users have an easy time of adapting to their new OS.

Kevin is excited about 2019 being a time of great change for Fedora. “In 2019 I am looking forward to seeing what and how we retool things to allow for lifecycle changes and more self service deliverables. I think it’s going to be a ton of work, but I am hopeful we will come out of it with a much better structure to carry us forward to the next period of Fedora success.” Kevin also had some words of appreciation for everyone in the Fedora community. “I’d like to thank everyone in the Fedora community for all their hard work on Fedora, it wouldn’t exist without the vibrant community we have.”


Photo by Perry Grone on Unsplash.

Thursday, 17 January

Wednesday, 16 January

23:18

Sailfish OS Sipoonkorpi is now available [Jolla Blog]

The release of Sailfish 3 has been a gratifying milestone for Jolla. Each new update completes the circle of the Sailfish 3 era, step by step, delivering new features and adding value to Sailfish OS.

This time, our name pick fell upon the woodlands of Sipoonkorpi. Sipoonkorpi is a 19 km² Finnish national park located in the municipalities of Helsinki, Vantaa and Sipoo. Sipoonkorpi is well known for its peaceful settings that combine nature and small villages to create an astonishing view.

Release Highlights

Sipoonkorpi’s beautifully diverse setting is reflected in this update, which delivers plenty of features that enhance both the functionality and design of Sailfish OS. Key elements are related to security, communication and user experience. Also, we’ve enhanced the light ambience feature by adding basic support for user-generated light ambiences.

Firewall

Privacy is one of our top priorities, and our focus on security is reflected in each of the updates made to Sailfish OS. We understand our corporate partners’ need for a secure system, and one part of that is providing dynamic security for network connections. A good example: when you connect to a wireless access point, we can restrict the network traffic based on configuration added to the system. This firewall configuration is set to block ICMP requests, and for developer mode it allows access to SSH only over WiFi or USB.

Light Ambiences

User-generated ambiences have always been a key way to personalize your device in Sailfish OS. In 3.0.0 we added two new ambiences with a dramatically different style: a light background and dark text. Now we’ve expanded this, and you can create light ambiences from any of your favourite pictures on your device. Light ambiences can easily be created from the gallery by selecting your favourite picture, pressing the “Ambience” icon, and choosing light as the style for the created ambience.

light ambience

Image Editing

We have added a redesigned image editing dialog that enables you to apply several actions at once, such as cropping, adjusting brightness and contrast, and rotating. After editing, both the original version and the edited version are saved. Also, the edited version will be opened automatically, which allows you to see the changes made.

light ambience 4

Look & feel

For style and improved legibility we have added a nice blur effect to the backgrounds in Top Menu, App Grid and system dialogs. Also, you can choose to see the current weather information on the Lock screen.

 

Blur_UIscreens

Localisation

Bulgarian language was added to Sailfish OS. Massive thanks to a handful of Bulgarian students for translating the OS from scratch, благодарим!

Sailfish X

The 3.0.1 update will be delivered to all devices supported in the Sailfish X program. With this update we will expand Sailfish X to support Planet Computers’ Gemini PDA. We’ll be opening downloads of the free trial version of Sailfish X for Gemini PDA with a beta release as soon as a few final details in distribution have been resolved. We will notify of this separately.

Further, the Android app compatibility for Sony Xperia XA2 variants is soon ready to be published and we will start delivering it via Jolla Store at the end of January. The initial version will be a public beta.

Bug Fixes

As always, we want to thank our community for your continuous support and help! Some of the bugs that were fixed include contacts disappearing if Google account sync failed, and graphics glitches, just to name a few.

For more information, please read the release notes, and for detailed instructions on how to update your Sailfish OS powered device, please check here.

Cheers,
James

The post Sailfish OS Sipoonkorpi is now available appeared first on Jolla Blog.

17:00

Migrating Kafka's Zookeeper With No Downtime [Yelp Engineering and Product Blog]

Here at Yelp we use Kafka extensively. In fact, we send billions of messages a day through our various clusters. Behind the scenes, Kafka uses Zookeeper for various distributed coordination tasks, such as deciding which Kafka broker is in charge of assigning partition leaders and storing metadata about the topics in its brokers. Kafka’s success within Yelp has also meant that our clusters have grown substantially from when they were first deployed. At the same time, our other heavy Zookeeper users (e.g., Smartstack and PaasTA) have increased in scale, putting more load on our shared Zookeeper clusters. To alleviate this situation, we...

01:00

Fedora Classroom: Getting started with L10N [Fedora Magazine]

Fedora Classroom sessions continue with an introductory session on Fedora Localization (L10N). The general schedule for sessions is available on the wiki, along with resources and recordings from previous sessions. Read on for more details about the upcoming L10N Classroom session next week.

Topic: Getting Started with L10N

The goal of the Fedora Localization Project (FLP) is to bring everything around Fedora (the Software, Documentation, Websites, and culture) closer to local communities (countries, languages and in general cultural groups).  The session is aimed at beginners. Here is the agenda:

  • What is L10N?
  • Difference between Translation and Localization
  • Overview: How does L10N work?
  • Fedora structure and peculiarities related to L10N
  • Ways to join, help, and contribute
  • Further information with references and links

When and where

Instructor

Silvia Sánchez has been a Fedora community member for a number of years. She currently focuses her contributions on QA, translation, wiki editing, and the Ambassadors teams among others. She has a varied background, having studied systems, programming, design, and photography. She speaks, reads, and writes Spanish, English, and German and further, also reads Portuguese, French, and Italian. In her free time, Silvia enjoys forest walks, art, and writing fiction.

Tuesday, 15 January

07:03

Why Diversity Is Important to Etsy [Code as Craft]

We recently published our company’s Guiding Principles. These are five common guideposts that apply to all organizations and departments within Etsy. We spent a great deal of time discussing, brainstorming, and editing these. By one estimate, over 30% of the company had some input at some phase of the process. This was a lot of effort by a lot of people but this was important work. These principles need to not only reflect how we currently act but at the same time they need to be aspirational for how we want to behave. These principles will be used in performance assessments, competency matrices, interview rubrics, career discussions, and in everyday meetings to refocus discussions.

One of the five principles is focused on diversity and inclusion. The principle states:

We embrace differences.

Diverse teams are stronger, and inclusive cultures are more resilient. When we seek out different perspectives, we make better decisions and build better products.

Why would we include diversity and inclusion as one of our top five guiding principles? One reason is that Etsy’s mission is to Keep Commerce Human. Etsy is a very mission-driven company. Many of our employees joined and remain with us because they feel so passionate about the mission. Every day, we keep commerce human by helping creative entrepreneurs find buyers who become committed fans of the seller’s art, crafts, and collections. The sellers themselves are a diverse group of individuals from almost every country in the world. We would have a hard time coming to work if the way we work, the way we develop products, the way we provide support, etc. isn’t done in a manner that supports this mission. Failing to be diverse and inclusive would fail that mission.

Besides aligning with our mission, there are other reasons that we want to have diverse teams. Complicated systems, which feature unpredictable, surprising, and unexpected behaviors, have always existed. Complex systems, however, have gone from something found mainly in large systems, such as cities, to almost everything we interact with today. Complex systems are far more difficult to manage than merely complicated ones, as subsystems interact in unexpected ways, making it harder to predict what will happen. Our engineers deal with complex systems on a daily basis. Complexity is a bit of an overloaded term, but scholarly literature generally categorizes it into three major groups, determined according to the point of view of the observer: behavioral, structural, and constructive.1 Between the website, mobile apps, and systems that support development, our engineers interact with highly complex systems from all three perspectives every day. Research has consistently shown that diverse teams are better able to manage complex systems.2

We recently invited Chris Clearfield and András Tilcsik, the authors of Meltdown (Penguin Canada, 2018), to speak with our engineering teams. The book and their talk contained many interesting topics, most based on Charles Perrow’s book, Normal Accidents (Princeton University Press; revised ed. 1999). However, perhaps the most important topic was based on a series of studies performed by Evan Apfelbaum and his colleagues at MIT. These studies revealed that as much as we’re predisposed to agree with a group, our willingness to disagree increases dramatically if the group is diverse.3 According to Clearfield and Tilcsik, homogeneity may facilitate “smooth, effortless interactions,” but diversity drives better decisions. Interestingly, it’s the diversity, and not necessarily the specific contributions of the individuals themselves, that causes greater skepticism, more open and active dialogue, and less group-think. This healthy skepticism is incredibly useful in a myriad of situations. One such situation is during pre-mortems, where a project team imagines that a project has failed and works to identify what potentially could lead to such an outcome. This is very different from a postmortem, where the failure has already occurred and the team is dissecting it. Often individuals who have been working on projects for weeks or more are biased by overconfidence and the planning fallacy. This exercise can help ameliorate these biases, especially when diverse team members participate. We firmly believe that when we seek out different perspectives, we make better decisions, build better products, and manage complex systems better.

Etsy Engineering is also incredibly innovative. One measure of that is the number of open source projects on our GitHub page and the continuing flow of contributions from our engineers in the open source community. We are of course big fans of open source as Etsy, like most modern platforms, wouldn’t exist in its current form without the myriad of people who have solved a problem and published their code under an open source license. But we also view this responsibility to give back as part of our culture. Part of everyone’s job at Etsy is making others better. It has at times been referred to as “generosity of spirit”, which to engineers means that we should be mentoring, teaching, contributing, speaking, writing, etc.  

Another measure of our innovation is our experiment velocity. We often run dozens of simultaneous experiments in order to improve the buyer and seller experiences. Under the mission of keeping commerce human, we strive every day to develop and improve products that enable 37M buyers to search and browse through 50M+ items to find just the right, special piece. As you can imagine, this takes some seriously advanced technologies to work effectively at this scale. And, to get that correct we need to experiment rapidly to see what works and what doesn’t. Fueling this innovation is the diversity of our workforce.

Companies with increased diversity unlock innovation by creating an environment where ideas are heard and employees can find senior-level sponsorship for compelling ideas. Leaders are twice as likely to unleash value-driving insights if they give diverse voices equal opportunity.4

So diversity fits our mission, helps manage complex systems, and drives greater innovation, but how is Etsy doing with respect to diversity? More than 50% of our Executive Team and half of our Board of Directors are women. More than 30% of Etsy Engineers identify as women/non-binary and more than 30% are people of color.5 These numbers are industry-leading, especially when compared to other tech companies that report “tech roles” rather than the narrower category of “engineering” roles. Even though we’re proud of our progress, we’re not fully satisfied. In October 2017, we announced a diversity impact goal to “meaningfully increase representation of underrepresented groups and ensure equity in Etsy’s workforce.” To advance our goal, we are focused on recruiting, hiring, retention, employee development, mentorship, sponsorship, and building an inclusive culture.

We have been working diligently on our recruiting and hiring processes. We’ve rewritten job descriptions, replaced some manual steps in the process with third-party vendors, and changed the order of steps in the interview process, all in an effort to recruit and hire the very best engineers without bias. We have also allocated funding and people in order to sponsor and attend conferences focused on underrepresented groups in tech. We’ll share our 2018 progress in Q1 2019.

Once engineers are onboard, we want them to bring their whole selves to work in an inclusive environment that allows them to thrive and be their best. One thing that we do to help with this is to promote and partner directly with employee resource groups (ERGs). Our ERGs include Asian Resource Community, Black Resource and Identity Group at Etsy, Jewish People at Etsy, Hispanic Latinx Network, Parents ERG, Queer@Etsy, and Women and NonBinary People in Tech. If you’re not familiar with ERGs, their mission and goals are to create a positive and inclusive workplace culture where employees from underrepresented backgrounds, lifestyles, and abilities have access to programs that foster a sense of community, contribute to professional development, and amplify diverse voices within our organization. Each of these ERGs has an executive sponsor. This ensures that there is a communication channel with upper management. It also highlights the value that we place upon the support that these groups provide.    

We are also focused on retaining our engineers. One of the things that we do to help in this area is to monitor for discrepancies that might indicate bias. During our compensation, assessment, and promotion cycles, we evaluate for inconsistencies. We perform this analysis both internally and through the use of third parties.  

Etsy Engineering has been a leader and innovator in the broader tech industry with regard to technology and process. We also want to be leaders in the industry with regard to diversity and inclusion. It is not only the right thing to do; it’s the right thing to do for our business. If this sounds exciting to you, we’d love to talk; just click here to learn more.

 

Endnotes:

1 Wade, J., & Heydari, B. (2014). Complexity: Definition and reduction techniques. In Proceedings of the Poster Workshop at the 2014 Complex Systems Design & Management International Conference.
2 Sargut, G., & McGrath, R. G. (2011). Learning to live with complexity. Harvard Business Review, 89(9), 68–76
3 Apfelbaum EP, Phillips KW, Richeson JA (2014) Rethinking the baseline in diversity research: Should we be explaining the effects of homogeneity? Perspect Psychol Sci 9(3):235–244.
4 Hewlett, S. A., Marshall, M., & Sherbin, L. (2013). How diversity can drive innovation. Harvard Business Review.
5 Etsy Impact Update (August 2018). https://extfiles.etsy.com/Impact/2017EtsyImpactUpdate.pdf

Monday, 14 January

11:49

Contribute at the Fedora Test Day for kernel 4.20 [Fedora Magazine]

The kernel team is working on final integration for kernel 4.20. This version was recently released and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test day for Tuesday, January 15, 2019. Refer to the wiki page for links to the test images you’ll need to participate.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.


01:00

How to Build a Netboot Server, Part 4 [Fedora Magazine]

One significant limitation of the netboot server built in this series is the operating system image being served is read-only. Some use cases may require the end user to modify the image. For example, an instructor may want to have the students install and configure software packages like MariaDB and Node.js as part of their course walk-through.

An added benefit of writable netboot images is the end user’s “personalized” operating system can follow them to different workstations they may use at later times.

Change the Bootmenu Application to use HTTPS

Create a self-signed certificate for the bootmenu application:

$ sudo -i
# MY_NAME=$(</etc/hostname)
# MY_TLSD=/opt/bootmenu/tls
# mkdir $MY_TLSD
# openssl req -newkey rsa:2048 -nodes -keyout $MY_TLSD/$MY_NAME.key -x509 -days 3650 -out $MY_TLSD/$MY_NAME.pem

Verify your certificate’s values. Make sure the “CN” value in the “Subject” line matches the DNS name that your iPXE clients use to connect to your bootmenu server:

# openssl x509 -text -noout -in $MY_TLSD/$MY_NAME.pem

Next, update the bootmenu application’s listen directive to use the HTTPS port and the newly created certificate and key:

# sed -i "s#listen => .*#listen => ['https://$MY_NAME:443?cert=$MY_TLSD/$MY_NAME.pem\&key=$MY_TLSD/$MY_NAME.key\&ciphers=AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA'],#" /opt/bootmenu/bootmenu.conf

Note the ciphers have been restricted to those currently supported by iPXE.

GnuTLS requires the “CAP_DAC_READ_SEARCH” capability, so add it to the bootmenu application’s systemd service:

# sed -i '/^AmbientCapabilities=/ s/$/ CAP_DAC_READ_SEARCH/' /etc/systemd/system/bootmenu.service
# sed -i 's/Serves iPXE Menus over HTTP/Serves iPXE Menus over HTTPS/' /etc/systemd/system/bootmenu.service
# systemctl daemon-reload
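After the two sed commands above, the relevant lines of /etc/systemd/system/bootmenu.service should resemble the following sketch (the exact directives depend on how the unit was created in the earlier parts of this series; CAP_NET_BIND_SERVICE is assumed here because the service now binds the privileged HTTPS port):

```
[Unit]
Description=Serves iPXE Menus over HTTPS

[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_DAC_READ_SEARCH
```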

Now, add an exception for the bootmenu service to the firewall and restart the service:

# MY_SUBNET=192.0.2.0
# MY_PREFIX=24
# firewall-cmd --add-rich-rule="rule family='ipv4' source address='$MY_SUBNET/$MY_PREFIX' service name='https' accept"
# firewall-cmd --runtime-to-permanent
# systemctl restart bootmenu.service

Use wget to verify it’s working:

$ MY_NAME=server-01.example.edu
$ MY_TLSD=/opt/bootmenu/tls
$ wget -q --ca-certificate=$MY_TLSD/$MY_NAME.pem -O - https://$MY_NAME/menu

Add HTTPS to iPXE

Update init.ipxe to use HTTPS. Then recompile the ipxe bootloader with options to embed and trust the self-signed certificate you created for the bootmenu application:

$ echo '#define DOWNLOAD_PROTO_HTTPS' >> $HOME/ipxe/src/config/local/general.h
$ sed -i 's/^chain http:/chain https:/' $HOME/ipxe/init.ipxe
$ cp $MY_TLSD/$MY_NAME.pem $HOME/ipxe
$ cd $HOME/ipxe/src
$ make clean
$ make bin-x86_64-efi/ipxe.efi EMBED=../init.ipxe CERT="../$MY_NAME.pem" TRUST="../$MY_NAME.pem"

You can now copy the HTTPS-enabled iPXE bootloader out to your clients and test that everything is working correctly:

$ cp $HOME/ipxe/src/bin-x86_64-efi/ipxe.efi $HOME/esp/efi/boot/bootx64.efi

Add User Authentication to Mojolicious

Create a PAM service definition for the bootmenu application:

# dnf install -y pam_krb5
# echo 'auth required pam_krb5.so' > /etc/pam.d/bootmenu

Add a library to the bootmenu application that uses the Authen-PAM perl module to perform user authentication:

# dnf install -y perl-Authen-PAM;
# MY_MOJO=/opt/bootmenu
# mkdir $MY_MOJO/lib
# cat << 'END' > $MY_MOJO/lib/PAM.pm
package PAM;

use Authen::PAM;

sub auth {
   my $success = 0;

   my $username = shift;
   my $password = shift;

   my $callback = sub {
      my @res;
      while (@_) {
         my $code = shift;
         my $msg = shift;
         my $ans = "";
   
         $ans = $username if ($code == PAM_PROMPT_ECHO_ON());
         $ans = $password if ($code == PAM_PROMPT_ECHO_OFF());
   
         push @res, (PAM_SUCCESS(), $ans);
      }
      push @res, PAM_SUCCESS();

      return @res;
   };

   my $pamh = new Authen::PAM('bootmenu', $username, $callback);

   {
      last unless ref $pamh;
      last unless $pamh->pam_authenticate() == PAM_SUCCESS;
      $success = 1;
   }

   return $success;
}

return 1;
END

The above code is taken almost verbatim from the Authen::PAM::FAQ man page.

Redefine the bootmenu application so it returns a netboot template only if a valid username and password are supplied:

# cat << 'END' > $MY_MOJO/bootmenu.pl
#!/usr/bin/env perl

use lib 'lib';

use PAM;
use Mojolicious::Lite;
use Mojolicious::Plugins;
use Mojo::Util ('url_unescape');

plugin 'Config';

get '/menu';
get '/boot' => sub {
   my $c = shift;

   my $instance = $c->param('instance');
   my $username = $c->param('username');
   my $password = $c->param('password');

   my $template = 'menu';

   {
      last unless $instance =~ /^fc[[:digit:]]{2}$/;
      last unless $username =~ /^[[:alnum:]]+$/;
      last unless PAM::auth($username, url_unescape($password));
      $template = $instance;
   }

   return $c->render(template => $template);
};

app->start;
END

The bootmenu application now looks for the lib directory relative to its WorkingDirectory. However, by default the working directory is set to the root directory of the server for systemd units. Therefore, you must update the systemd unit to set WorkingDirectory to the root of the bootmenu application instead:

# sed -i "/^RuntimeDirectory=/ a WorkingDirectory=$MY_MOJO" /etc/systemd/system/bootmenu.service
# systemctl daemon-reload

Update the templates to work with the redefined bootmenu application:

# cd $MY_MOJO/templates
# MY_BOOTMENU_SERVER=$(</etc/hostname)
# MY_FEDORA_RELEASES="28 29"
# for i in $MY_FEDORA_RELEASES; do echo '#!ipxe' > fc$i.html.ep; grep "^kernel\|initrd" menu.html.ep | grep "fc$i" >> fc$i.html.ep; echo "boot || chain https://$MY_BOOTMENU_SERVER/menu" >> fc$i.html.ep; sed -i "/^:f$i$/,/^boot /c :f$i\nlogin\nchain https://$MY_BOOTMENU_SERVER/boot?instance=fc$i\&username=\${username}\&password=\${password:uristring} || goto failed" menu.html.ep; done

The result of the last command above should be three files similar to the following:

menu.html.ep:

#!ipxe

set timeout 5000

:menu
menu iPXE Boot Menu
item --key 1 lcl 1. Microsoft Windows 10
item --key 2 f29 2. RedHat Fedora 29
item --key 3 f28 3. RedHat Fedora 28
choose --timeout ${timeout} --default lcl selected || goto shell
set timeout 0
goto ${selected}

:failed
echo boot failed, dropping to shell...
goto shell

:shell
echo type 'exit' to get back to the menu
set timeout 0
shell
goto menu

:lcl
exit

:f29
login
chain https://server-01.example.edu/boot?instance=fc29&username=${username}&password=${password:uristring} || goto failed

:f28
login
chain https://server-01.example.edu/boot?instance=fc28&username=${username}&password=${password:uristring} || goto failed

fc29.html.ep:

#!ipxe
kernel --name kernel.efi ${prefix}/vmlinuz-4.19.5-300.fc29.x86_64 initrd=initrd.img ro ip=dhcp rd.peerdns=0 nameserver=192.0.2.91 nameserver=192.0.2.92 root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc29-lun-1 netroot=iscsi:192.0.2.158::::iqn.edu.example.server-01:fc29 console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
initrd --name initrd.img ${prefix}/initramfs-4.19.5-300.fc29.x86_64.img
boot || chain https://server-01.example.edu/menu

fc28.html.ep:

#!ipxe
kernel --name kernel.efi ${prefix}/vmlinuz-4.19.3-200.fc28.x86_64 initrd=initrd.img ro ip=dhcp rd.peerdns=0 nameserver=192.0.2.91 nameserver=192.0.2.92 root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc28-lun-1 netroot=iscsi:192.0.2.158::::iqn.edu.example.server-01:fc28 console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
initrd --name initrd.img ${prefix}/initramfs-4.19.3-200.fc28.x86_64.img
boot || chain https://server-01.example.edu/menu

Now, restart the bootmenu application and verify authentication is working:

# systemctl restart bootmenu.service

Make the iSCSI Target Writeable

Now that user authentication works through iPXE, you can create per-user, writeable overlays on top of the read-only image on demand when users connect. Using a copy-on-write overlay has three advantages over simply copying the original image file for each user:

  1. The copy can be created very quickly. This allows creation on-demand.
  2. The copy does not increase the disk usage on the server. Only what the user writes to their personal copy of the image is stored in addition to the original image.
  3. Since most sectors for each copy are the same sectors on the server’s storage, they’ll likely already be loaded in RAM when subsequent users access their copies of the operating system. This improves the server’s performance because RAM is faster than disk I/O.
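The first two advantages come from the overlays being sparse files. You can see the effect from the command line with the same dd seek trick used later in this article to create the per-user backing files (the path and size here are illustrative):

```shell
# allocate nothing, but set the file's apparent size to 1 GiB
# (2097152 sectors x 512 bytes) by seeking past the end of the file
dd if=/dev/zero of=/tmp/overlay-demo.img status=none bs=512 count=0 seek=2097152

ls -lh /tmp/overlay-demo.img   # apparent size: 1.0G
du -h  /tmp/overlay-demo.img   # actual disk usage: (close to) 0

rm -f /tmp/overlay-demo.img
```

Blocks are only allocated as they are written, which is why a fresh overlay costs essentially no disk space and can be created instantly.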

One potential pitfall of using copy-on-write is that once overlays are created, the images on which they are overlayed must not be changed. If they are changed, all the overlays will be corrupted. Then the overlays must be deleted and replaced with new, blank overlays. Even simply mounting the image file in read-write mode can cause sufficient filesystem updates to corrupt the overlays.

Due to the potential for the overlays to be corrupted if the original image is modified, mark the original image as immutable by running:

# chattr +i </path/to/file>

You can use lsattr </path/to/file> to view the status of the immutable flag, and chattr -i </path/to/file> to unset it. While the immutable flag is set, even the root user or a system process running as root cannot modify or delete the file.

Begin by stopping the tgtd.service so you can change the image files:

# systemctl stop tgtd.service

It’s normal for this command to take a minute or so to complete when there are still open connections.

Now, remove the read-only iSCSI export. Then update the readonly-root configuration file in the template so the image is no longer read-only:

# MY_FC=fc29
# rm -f /etc/tgt/conf.d/$MY_FC.conf
# TEMP_MNT=$(mktemp -d)
# mount /$MY_FC.img $TEMP_MNT
# sed -i 's/^READONLY=yes$/READONLY=no/' $TEMP_MNT/etc/sysconfig/readonly-root
# sed -i 's/^Storage=volatile$/#Storage=auto/' $TEMP_MNT/etc/systemd/journald.conf
# umount $TEMP_MNT

Journald was changed from logging to volatile memory back to its default (log to disk if /var/log/journal exists) because a user reported that their clients would freeze with an out-of-memory error due to an application generating excessive system logs. The downside to logging to disk is that the clients generate extra write traffic, which might burden your netboot server with unnecessary I/O. You should decide which option, log to memory or log to disk, is preferable for your environment.
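For reference, after the sed command above the edited line in the image’s /etc/systemd/journald.conf simply reverts to the commented-out default (surrounding lines omitted):

```
[Journal]
#Storage=auto
```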

Since you won’t make any further changes to the template image, set the immutable flag on it and restart the tgtd.service:

# chattr +i /$MY_FC.img
# systemctl start tgtd.service

Now, update the bootmenu application:

# cat << 'END' > $MY_MOJO/bootmenu.pl
#!/usr/bin/env perl

use lib 'lib';

use PAM;
use Mojolicious::Lite;
use Mojolicious::Plugins;
use Mojo::Util ('url_unescape');

plugin 'Config';

get '/menu';
get '/boot' => sub {
   my $c = shift;

   my $instance = $c->param('instance');
   my $username = $c->param('username');
   my $password = $c->param('password');

   my $chapscrt;
   my $template = 'menu';

   {
      last unless $instance =~ /^fc[[:digit:]]{2}$/;
      last unless $username =~ /^[[:alnum:]]+$/;
      last unless PAM::auth($username, url_unescape($password));
      last unless $chapscrt = `sudo scripts/mktgt $instance $username`;
      $template = $instance;
   }

   return $c->render(template => $template, username => $username, chapscrt => $chapscrt);
};

app->start;
END

This new version of the bootmenu application calls a custom mktgt script which, on success, returns a random CHAP password for each new iSCSI target that it creates. The CHAP password prevents one user from mounting another user’s iSCSI target by indirect means. The app only returns the correct iSCSI target password to a user who has successfully authenticated.

The mktgt script is prefixed with sudo because it needs root privileges to create the target.

The $username and $chapscrt variables also pass to the render command so they can be incorporated into the templates returned to the user when necessary.

Next, update our boot templates so they can read the username and chapscrt variables and pass them along to the end user. Also update the templates to mount the root filesystem in rw (read-write) mode:

# cd $MY_MOJO/templates
# sed -i "s/:$MY_FC/:$MY_FC-<%= \$username %>/g" $MY_FC.html.ep
# sed -i "s/ netroot=iscsi:/ netroot=iscsi:<%= \$username %>:<%= \$chapscrt %>@/" $MY_FC.html.ep
# sed -i "s/ ro / rw /" $MY_FC.html.ep

After running the above commands, you should have boot templates like the following:

#!ipxe
kernel --name kernel.efi ${prefix}/vmlinuz-4.19.5-300.fc29.x86_64 initrd=initrd.img rw ip=dhcp rd.peerdns=0 nameserver=192.0.2.91 nameserver=192.0.2.92 root=/dev/disk/by-path/ip-192.0.2.158:3260-iscsi-iqn.edu.example.server-01:fc29-<%= $username %>-lun-1 netroot=iscsi:<%= $username %>:<%= $chapscrt %>@192.0.2.158::::iqn.edu.example.server-01:fc29-<%= $username %> console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
initrd --name initrd.img ${prefix}/initramfs-4.19.5-300.fc29.x86_64.img
boot || chain https://server-01.example.edu/menu

NOTE: If you need to view the boot template after the variables have been interpolated, you can insert the “shell” command on its own line just before the “boot” command. Then, when you netboot your client, iPXE gives you an interactive shell where you can enter “imgstat” to view the parameters being passed to the kernel. If everything looks correct, you can type “exit” to leave the shell and continue the boot process.

Now allow the bootmenu user to run the mktgt script (and only that script) as root via sudo:

# echo "bootmenu ALL = NOPASSWD: $MY_MOJO/scripts/mktgt *" > /etc/sudoers.d/bootmenu

The bootmenu user should not have write access to the mktgt script or any other files under its home directory. All the files under /opt/bootmenu should be owned by root, and should not be writable by any user other than root.

Sudo does not work well with systemd’s DynamicUser option, so create a normal user account and set the systemd service to run as that user:

# useradd -r -c 'iPXE Boot Menu Service' -d /opt/bootmenu -s /sbin/nologin bootmenu
# sed -i 's/^DynamicUser=true$/User=bootmenu/' /etc/systemd/system/bootmenu.service
# systemctl daemon-reload

Finally, create a directory for the copy-on-write overlays and create the mktgt script that manages the iSCSI targets and their overlayed backing stores:

# mkdir /$MY_FC.cow
# mkdir $MY_MOJO/scripts
# cat << 'END' > $MY_MOJO/scripts/mktgt
#!/usr/bin/env perl

# if another instance of this script is running, wait for it to finish
"$ENV{FLOCKER}" eq 'MKTGT' or exec "env FLOCKER=MKTGT flock /tmp $0 @ARGV";

# use "RETURN" to print to STDOUT; everything else goes to STDERR by default
open(RETURN, '>&', STDOUT);
open(STDOUT, '>&', STDERR);

my $instance = shift or die "instance not provided";
my $username = shift or die "username not provided";

my $img = "/$instance.img";
my $dir = "/$instance.cow";
my $top = "$dir/$username";

-f "$img" or die "'$img' is not a file"; 
-d "$dir" or die "'$dir' is not a directory";

my $base;
die unless $base = `losetup --show --read-only --nooverlap --find $img`;
chomp $base;

my $size;
die unless $size = `blockdev --getsz $base`;
chomp $size;

# create the per-user sparse file if it does not exist
if (! -e "$top") {
   die unless system("dd if=/dev/zero of=$top status=none bs=512 count=0 seek=$size") == 0;
}

# create the copy-on-write overlay if it does not exist
my $cow="$instance-$username";
my $dev="/dev/mapper/$cow";
if (! -e "$dev") {
   my $over;
   die unless $over = `losetup --show --nooverlap --find $top`;
   chomp $over;
   die unless system("echo 0 $size snapshot $base $over p 8 | dmsetup create $cow") == 0;
}

my $tgtadm = '/usr/sbin/tgtadm --lld iscsi';

# get textual representations of the iscsi targets
my $text = `$tgtadm --op show --mode target`;
my @targets = $text =~ /(?:^T.*\n)(?:^ .*\n)*/mg;

# convert the textual representations into a hash table
my $targets = {};
foreach (@targets) {
   my $tgt;
   my $sid;

   foreach (split /\n/) {
      /^Target (\d+)(?{ $tgt = $targets->{$^N} = [] })/;
      /I_T nexus: (\d+)(?{ $sid = $^N })/;
      /Connection: (\d+)(?{ push @{$tgt}, [ $sid, $^N ] })/;
   }
}

my $hostname;
die unless $hostname = `hostname`;
chomp $hostname;

my $target = 'iqn.' . join('.', reverse split('\.', $hostname)) . ":$cow";

# find the target id corresponding to the provided target name and
# close any existing connections to it
my $tid = 0;
foreach (@targets) {
   next unless /^Target (\d+)(?{ $tid = $^N }): $target$/m;
   foreach (@{$targets->{$tid}}) {
      die unless system("$tgtadm --op delete --mode conn --tid $tid --sid $_->[0] --cid $_->[1]") == 0;
   }
}

# create a new target if an existing one was not found
if ($tid == 0) {
   # find an available target id
   my @ids = (0, sort keys %{$targets});
   $tid = 1; while ($ids[$tid]==$tid) { $tid++ }

   # create the target
   die unless -e "$dev";
   die unless system("$tgtadm --op new --mode target --tid $tid --targetname $target") == 0;
   die unless system("$tgtadm --op new --mode logicalunit --tid $tid --lun 1 --backing-store $dev") == 0;
   die unless system("$tgtadm --op bind --mode target --tid $tid --initiator-address ALL") == 0;
}

# (re)set the provided target's chap password
my $password = join('', map(chr(int(rand(26))+65), 1..8));
my $accounts = `$tgtadm --op show --mode account`;
if ($accounts =~ / $username$/m) {
   die unless system("$tgtadm --op delete --mode account --user $username") == 0;
}
die unless system("$tgtadm --op new --mode account --user $username --password $password") == 0;
die unless system("$tgtadm --op bind --mode account --tid $tid --user $username") == 0;

# return the new password to the iscsi target on stdout
print RETURN $password;
END
# chmod +x $MY_MOJO/scripts/mktgt

The above script does five things:

  1. It creates the /<instance>.cow/<username> sparse file if it does not already exist.
  2. It creates the /dev/mapper/<instance>-<username> device node that serves as the copy-on-write backing store for the iSCSI target if it does not already exist.
  3. It creates the iqn.<reverse-hostname>:<instance>-<username> iSCSI target if it does not exist. Or, if the target does exist, it closes any existing connections to it because the image can only be opened in read-write mode from one place at a time.
  4. It (re)sets the chap password on the iqn.<reverse-hostname>:<instance>-<username> iSCSI target to a new random value.
  5. It prints the new chap password on standard output if all of the previous tasks completed successfully.

You should be able to test the mktgt script from the command line by running it with valid test parameters. For example:

# echo `$MY_MOJO/scripts/mktgt fc29 jsmith`

When run from the command line, the mktgt script should print out either the eight-character random password for the iSCSI target if it succeeded or the line number on which something went wrong if it failed.

On occasion, you may want to delete an iSCSI target without having to stop the entire service. For example, a user might inadvertently corrupt their personal image, in which case you would need to systematically undo everything that the above mktgt script does so that the next time they log in they will get a copy of the original image.

Below is an rmtgt script that undoes, in reverse order, what the above mktgt script did:

# mkdir $HOME/bin
# cat << 'END' > $HOME/bin/rmtgt
#!/usr/bin/env perl

@ARGV >= 2 or die "usage: $0 <instance> <username> [+d|+f]\n";

my $instance = shift;
my $username = shift;

my $rmd = ($ARGV[0] eq '+d'); #remove device node if +d flag is set
my $rmf = ($ARGV[0] eq '+f'); #remove sparse file if +f flag is set
my $cow = "$instance-$username";

my $hostname;
die unless $hostname = `hostname`;
chomp $hostname;

my $tgtadm = '/usr/sbin/tgtadm';
my $target = 'iqn.' . join('.', reverse split('\.', $hostname)) . ":$cow";

my $text = `$tgtadm --op show --mode target`;
my @targets = $text =~ /(?:^T.*\n)(?:^ .*\n)*/mg;

my $targets = {};
foreach (@targets) {
   my $tgt;
   my $sid;

   foreach (split /\n/) {
      /^Target (\d+)(?{ $tgt = $targets->{$^N} = [] })/;
      /I_T nexus: (\d+)(?{ $sid = $^N })/;
      /Connection: (\d+)(?{ push @{$tgt}, [ $sid, $^N ] })/;
   }
}

my $tid = 0;
foreach (@targets) {
   next unless /^Target (\d+)(?{ $tid = $^N }): $target$/m;
   foreach (@{$targets->{$tid}}) {
      die unless system("$tgtadm --op delete --mode conn --tid $tid --sid $_->[0] --cid $_->[1]") == 0;
   }
   die unless system("$tgtadm --op delete --mode target --tid $tid") == 0;
   print "target $tid deleted\n";
   sleep 1;
}

my $dev = "/dev/mapper/$cow";
if ($rmd or ($rmf and -e $dev)) {
   die unless system("dmsetup remove $cow") == 0;
   print "device node $dev deleted\n";
}

if ($rmf) {
   my $sf = "/$instance.cow/$username";
   die "sparse file $sf not found" unless -e "$sf";
   die unless system("rm -f $sf") == 0;
   die unless not -e "$sf";
   print "sparse file $sf deleted\n";
}
END
# chmod +x $HOME/bin/rmtgt

For example, to use the above script to completely remove the fc29-jsmith target including its backing store device node and its sparse file, run the following:

# rmtgt fc29 jsmith +f

Once you’ve verified that the mktgt script is working properly, you can restart the bootmenu service. The next time someone netboots, they should receive a personal copy of the netboot image they can write to:

# systemctl restart bootmenu.service

Users should now be able to modify the root filesystem, as demonstrated in the screenshot below.

Monday, 07 January

10:00

Never mind killer robots—here are six real AI dangers to watch out for in 2019 [Top News - MIT Technology Review]

Last year a string of controversies revealed a darker (and dumber) side to artificial intelligence.