New York (CNN Business) Nintendo revealed on Friday that 160,000 accounts have been breached since the beginning of April by hackers using others’ Nintendo Network IDs without permission. The company announced that users will no longer be able to use these IDs to log into their accounts, and that passwords on accounts that may have been breached will be reset.
Security startups to the rescue.
As we continue to ride out the pandemic, security experts are closely monitoring the surge of coronavirus-related cyber threats. Just this week, Google’s Threat Analysis Group, the company’s elite threat-hunting unit, said that while the overall number of threats remains largely the same, opportunistic hackers are retooling their efforts to piggyback on coronavirus.
Some startups are downsizing and laying off staff, but several cybersecurity startups are faring better, thanks to an uptick in demand for security protections. As the world continues to pivot toward working from home, demand has surged in key cybersecurity verticals in ways few expected. To wit, identity startups are needed more than ever to make sure only authorized remote employees get access to corporate systems.
Can the startups take on the giants at their own game?
THE BIG PICTURE
Another payments processor drops the security ball
For the third time this year, a payments processor has admitted to a security lapse. First it was Cornerstone, then it was nCourt. This time it’s Paay, a New York-based card payment processor startup that left a database on the internet unprotected and without a password. Worse, the data was storing full, plaintext credit card numbers.
Anyone who knew where to look could have accessed the data. Luckily, a security researcher found it and reported it to TechCrunch. We alerted the company; it quickly took the data offline, but Paay denied that the data stored full credit card numbers. We even sent the co-founder a portion of the data showing card numbers stored in plaintext, but he did not respond to our follow-up.
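For context, card-industry practice (PCI DSS) is to store at most a one-way token plus the last four digits, never the full card number in plaintext. A minimal, hypothetical sketch of that idea (real processors use vault- or HSM-backed tokenization services, not a bare hash; the function and salt scheme here are illustrative):

```python
import hashlib
import secrets

def tokenize_card(pan: str, salt: bytes) -> dict:
    """Store a one-way token and the last four digits, never the full
    primary account number (PAN). Illustrative only."""
    token = hashlib.sha256(salt + pan.encode()).hexdigest()
    return {"token": token, "last4": pan[-4:]}

salt = secrets.token_bytes(16)                # per-deployment secret salt
record = tokenize_card("4242424242424242", salt)

print(record["last4"])                        # 4242 -- safe to display
print("4242424242424242" in record["token"])  # False: full PAN never stored
```

Even a leaked database of records like this would expose no usable card numbers, which is exactly what an unprotected plaintext store fails to guarantee.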
Later this summer, physicists at the Argonne and Fermi national laboratories will exchange quantum information across 30 miles of optical fiber running beneath the suburbs of Chicago. One lab will generate a pair of entangled photons—particles that have identical states and are linked in such a way that what happens to one happens to the other—and send them to their colleagues at the other lab, who will extract the quantum information carried by these particles of light. By establishing this two-way link, the labs will become the first nodes in what the researchers hope will one day be a quantum internet linking quantum computers around the nation.
A quantum web is loaded with potential. It would enable ultra-secure data transmission through quantum encryption. Astronomers could study distant galaxies in unprecedented detail by combining the rare intergalactic photons collected by individual optical telescopes to create a distributed superscope. Linking small quantum computers could create a quantum cloud and rapidly scale our computing abilities. The problem is that quantum information hates long-distance travel. Send entangled photons out into the real world through optical fiber and, in less than 50 miles, environmental interference will destroy their quantum state. But if the photons were relayed through a satellite instead, they could be sent to destinations hundreds—and potentially thousands—of miles away. So in 2018, NASA partnered with MIT’s Lincoln Laboratory to develop the technologies needed to make it happen.
The goal of the National Space Quantum Laboratory program, sometimes referred to as Quantum Technology in Space, is to use a laser system on the International Space Station to exchange quantum information between two devices on Earth without a physical link. The refrigerator-sized module would be attached to the outside of the space station and would generate the entangled photons that carry the quantum information to Earth. The demonstration would pave the way for a satellite that could take entangled particles generated in local quantum networks and send them to far-flung locations.
“In the future, we will likely see quantum information from Argonne routed through a sequence of satellites to another location across the country, or the world,” says David Awschalom, a senior scientist and the quantum group leader at Argonne National Laboratory. “Much like with existing telecommunications, developing a global quantum network may involve a combination of space- and ground-based platforms.”
NASA is not the first to take quantum technologies to space. In 2016 China launched a satellite that sent a pair of entangled photons to two cities more than 700 miles apart. It was a critical test for long-distance quantum key distribution, which uses particles to encrypt information in a way that is almost impossible to break. It demonstrated that entangled particles could survive the journey from space to Earth by randomly sending photons to two ground stations and comparing when they arrived. If two photons arrived at the same time, they must have been entangled.
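The timing comparison behind that test amounts to coincidence counting: photon arrivals at the two ground stations that fall within a narrow window of each other are counted as members of an entangled pair. A toy sketch with made-up timestamps and window:

```python
def coincidences(times_a, times_b, window):
    """Count arrivals at station A that have a partner at station B
    within `window`. Toy model of the coincidence test for entanglement."""
    hits = 0
    for ta in times_a:
        if any(abs(ta - tb) <= window for tb in times_b):
            hits += 1
    return hits

# Made-up photon arrival times (nanoseconds) at two ground stations.
station_a = [10.0, 55.2, 90.1]
station_b = [10.3, 70.0, 90.0]
print(coincidences(station_a, station_b, window=0.5))  # 2 coincident pairs
```

Real experiments do this statistically over millions of events, comparing the coincidence rate against what uncorrelated sources would produce.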
It was a groundbreaking demonstration, but “you can’t use that to generate a quantum network, because the photons are arriving at random times, and it wasn’t sending any quantum information,” says Scott Hamilton, who leads the Optical Communications Technology group at MIT’s Lincoln Lab. In this sense, what NASA is pursuing is totally different. The agency wants to use a technique called entanglement swapping to send quantum information carried by entangled particles from one node on the ground to another. This requires being able to send entangled photons with very precise timing and measure them without destroying the information they carry.
Entanglement is the source of many of the advantages of a quantum network, since it allows for information to be exchanged between two particles no matter how far apart they happen to be—what Einstein famously called “spooky action at a distance.” These particles are typically photons, which can be thought of as the envelopes carrying letters full of quantum information. But this information is notoriously delicate. Too much interference from the outside world will cause the information in the quantum missives to disappear like vanishing ink.
Typically, entangled photons are generated from a single source. A laser is fired at a special kind of crystal, and two identical photons pop out; one copy stays with the sender, the other goes to the receiver. The problem is that entangled photons can’t be amplified as they travel from sender to receiver, which limits how far they can travel before the information they carry is destroyed. Entanglement swapping is the art of entangling photons generated from two different sources, which allows the photons to be passed from node to node in a network similar to how a repeater relays optical or radio signals in a classical system.
“Entanglement swapping is a necessity to propagate entanglement over large distances,” says Babak Saif, an optical physicist at NASA’s Goddard Space Flight Center. “It’s the first step toward a quantum internet.”
In NASA’s system, a pair of entangled photons is generated on the International Space Station and another pair of entangled photons is generated at a ground station on Earth. One of the photons from space and one of the photons generated on Earth are sent to a quantum device that performs a bell measurement, which determines the state of each photon. This simultaneous measurement causes the remaining photons from their respective pairs—the one in space and the other on Earth—to become entangled, despite being generated by different sources. The next step is to send the remaining photon in space to a different ground station on Earth and repeat the process. This entangles the photons at each ground station and establishes a connection between the two quantum devices without a physical link.
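The swap described above can be checked in a toy state-vector calculation. A minimal NumPy sketch, with everything idealized (no loss, noise, or timing error, and only the one Bell-measurement outcome shown): projecting the two middle photons of two independent Bell pairs onto a Bell state leaves the two outer photons entangled, even though they never interacted.

```python
import numpy as np

# |Phi+> Bell state written as a 2x2 amplitude tensor: bell[a, b] = <ab|Phi+>.
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

# Four qubits: (q0, q1) are one entangled pair, (q2, q3) the other.
# q0 and q3 have never interacted and share no correlation yet.
psi = np.einsum('ab,cd->abcd', bell, bell)

# Bell measurement on q1 and q2: project those two qubits onto |Phi+>.
# out[a, d] is the unnormalized post-measurement state of (q0, q3).
out = np.einsum('abcd,bc->ad', psi, bell.conj())

prob = float(np.sum(np.abs(out) ** 2))  # probability of this Bell outcome
out = out / np.sqrt(prob)               # renormalize the surviving state

print(round(prob, 3))            # 0.25 (one of four equally likely outcomes)
print(np.allclose(out, bell))    # True: q0 and q3 now form a Bell pair
```

The hard part NASA faces is not this algebra but the engineering: making the two photons arrive at the measurement device simultaneously and detecting them without destroying the state.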
It all sounds good in theory, but Saif says just getting the timing right is a major challenge. Entanglement swapping requires both photons—the one from space and the one from Earth—to arrive in the measurement system on Earth at the exact same time. Moreover, the photons need to be able to hit a small receiver with perfect accuracy. Achieving this level of precision from a spacecraft 250 miles away moving 17,000 miles per hour is every bit as hard as it sounds. To make it happen, NASA needs a damn good space laser.
NASA’s last major experiment in space laser communications was in 2013, when the agency sent data to and from a satellite orbiting the moon. The experiment was a huge success and allowed researchers to send data from the lunar satellite to Earth at over 600 megabits per second—that’s faster than the internet connections in most homes. But the lunar laser link wasn’t long for this world. Shortly after the experiment, NASA plowed the satellite into the moon so researchers could study the dust it kicked up on impact.
“Unfortunately, they crashed a perfectly good laser communication system on purpose,” says David Israel, the Exploration and Space Communications Projects Division architect at NASA’s Goddard Space Flight Center. But he says the experiment laid the groundwork for the Laser Communications Relay Demonstration (LCRD) satellite, which is scheduled to launch early next year. This new satellite will spend its first few years in orbit relaying laser communications from a ground station in California to one in Hawaii so Israel and his colleagues can study how the weather affects laser communications.
The long-term vision is to transition the satellite from an experiment to a data relay for future missions. Israel says its first operational user will be the ILLUMA-T experiment, an acronym so tortuous that I am not even going to spell it out here. ILLUMA-T is a laser communication station that is scheduled to be installed on the International Space Station in 2022 and will relay data through the LCRD satellite to the ground to experiment with laser cross-links in space. “The goal is to connect it to the onboard systems so that LCRD and ILLUMA-T are not so much experiments anymore, but another path to get data to and from the space station,” says Israel.
Together, ILLUMA-T and the LCRD satellite will lay the foundation for an optical communications network in space, which will enable the next generation of lunar explorers to send back high-definition video from the surface of the moon. But they will also be used as test beds to qualify the laser technologies needed for NASA’s quantum communication ambitions. “Since we were already building an optical thing for the space station, the idea was, why not go the extra mile and make it quantum enhanced?” says Nasser Barghouty, who leads the Quantum Sciences and Technology Group at NASA.
Hamilton and his colleagues at MIT Lincoln Lab are already building a tabletop prototype of the quantum systems that could be connected to ILLUMA-T. He says it will be used to demonstrate entanglement swapping on Earth and that a space-ready version could be ready within five years. But whether or not the system will ever be installed on the space station is an open question.
Earlier this year, Hamilton, Barghouty, and other quantum physicists gathered for a workshop at the University of California, Berkeley, to discuss the future of quantum communications at NASA. One of the main topics of discussion was whether to start with a quantum communication demo on the space station or proceed directly to a quantum communication satellite. While the space station is a useful test platform for advanced technologies, its low orbit means it can only see a relatively small portion of the Earth’s surface at a time. To establish a quantum link between locations that are thousands of miles apart requires a satellite orbiting higher than the ISS.
NASA’s plan to build a quantum satellite link is referred to as “Marconi 2.0,” a nod to the Italian inventor Guglielmo Marconi, who was the first to achieve a long-distance radio transmission. Barghouty says the main idea behind Marconi 2.0 is to establish a space-based quantum link between Europe and North America by the mid- to late-2020s. But the details are still being discussed. “Marconi 2.0 is not a specific mission, but a vaguely defined class of missions,” says Barghouty. “There are a lot of variations on the concept.”
Hamilton says he expects NASA will have a finalized road map for its quantum communication program in the next year or two. In the meantime, he and his colleagues are focused on building the technologies that will make the first long-distance quantum network possible. Although the exact form this network will take is still being discussed, one thing is for certain—the road to a quantum internet passes through space.
It was another week of social distancing or quarantine for most of the world, but Google published findings that 12 government-backed hacking groups have been undeterred by the pandemic and are, in fact, trying to take advantage of those conditions for intelligence gathering. Another report found that China, for one, has been busy during the pandemic hacking Uighurs’ iPhones in a recent months-long campaign.
We broke down how Apple and Google are using aggregate smartphone location data to visualize social distancing trends. And in an exclusive interview with WIRED, Federal Bureau of Investigation director Christopher Wray warned that domestic terrorism is a growing threat in the United States.
On top of all the other digital threats, researchers emphasized this week that so-called "zero-click" hacks that don't require any interaction from users to initiate may be more prevalent and varied than most people realize. Such attacks are difficult to detect with current tools.
And there's more. Every Saturday we round up the security and privacy stories that we didn’t break or report on in depth but think you should know about. Click on the headlines to read them, and stay safe out there.
On Wednesday, the video conferencing service Zoom announced a number of small but needed security improvements. As Zoom usage has increased during the pandemic, so has scrutiny on the service's security and privacy offerings. This week's announcement of incremental improvements is part of a 90-day plan the company announced to overhaul its practices. One change is that Zoom will now offer AES 256 encryption on all meetings, meaning data will be encrypted with a 256-bit key. Zoom previously used AES 128, a reasonable option, but a controversial one in Zoom's case, because the company claimed in documentation and marketing materials that it used AES 256 all along.
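The substance of that change is key length. Both variants use the same AES cipher; the 256-bit version uses a 32-byte key instead of a 16-byte one, which raises the brute-force search space from 2^128 to 2^256. A quick sketch of the difference (Zoom’s actual key management and cipher modes are its own implementation detail):

```python
import secrets

key_128 = secrets.token_bytes(16)   # 128-bit AES key (16 bytes)
key_256 = secrets.token_bytes(32)   # 256-bit AES key (32 bytes)

print(len(key_128) * 8)   # 128
print(len(key_256) * 8)   # 256

# Every extra key bit doubles the brute-force work: the AES-256 keyspace
# is 2**128 times larger than the AES-128 keyspace.
print(2**256 // 2**128 == 2**128)   # True
```

Either key size is far beyond practical brute force; the controversy was about Zoom’s claims, not the strength of AES-128 itself.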
Facebook data from more than 267 million profiles is being sold on criminal dark web forums for £500, or about $618. The information doesn't include passwords, but does include details like users' full names, phone numbers, and Facebook IDs. Though such information can't be used to break into the accounts directly, it can fuel digital scams like phishing. Most of the trove seems to be the same as data found by researcher Bob Diachenko in an exposed cloud repository last month. Even after that bucket was taken down, though, a copy of the information plus an additional 42 million records popped up in a different repository.
A growing number of Nintendo users over the past few weeks have watched fraudsters take control of their accounts, in many cases using saved credit cards or linked PayPal accounts to buy Nintendo games or currency for the popular game Fortnite. At the beginning of April, Nintendo encouraged users to turn on two-factor authentication to protect their accounts, but it was unclear how hackers were breaking in. On Friday, the company confirmed that hackers had gained unauthorized access to accounts and announced it was discontinuing users' ability to log into their Nintendo Accounts using Nintendo Network IDs from older Wii U and 3DS systems. Nintendo also says it will contact affected users about resetting passwords. On its US customer support page, the company writes, "While we continue to investigate, we would like to reassure users that there is currently no evidence pointing toward a breach of Nintendo’s databases, servers or services."
At a time when more transactions than ever are happening online, payments behemoth Stripe is announcing three new features to continue expanding its reach.
The company today announced that it will now offer card issuing services directly to businesses to let them in turn make credit cards for customers tailored to specific purposes. Alongside that, it’s going to expand the number of accepted local, large card networks to cut down some of the steps it takes to make transactions in international markets. And finally, it’s launching a “revenue optimization” feature that essentially will use Stripe’s AI algorithms to reassess and approve more flagged transactions that might have otherwise been rejected in the past.
Together the three features underscore how Stripe is continuing to scale up with more services around its core payment processing APIs, a significant step in the wake of last week announcing its biggest fundraise to date: $600 million at a $36 billion valuation.
The rollouts of the new products are specifically coming at a time when Stripe has seen a big boost in usage among some (but not all) of its customers, said John Collison, Stripe’s co-founder and president, in an interview. Instacart, which is providing grocery delivery at a time when many are living under stay-at-home orders, has seen transactions up by 300% in recent weeks. Another newer customer, Zoom, is also seeing business boom. Amazon, Stripe’s behemoth customer that Collison would not discuss in any specific terms except to confirm it’s a close partner, is also seeing extremely heavy usage.
But other Stripe users — for example, many of its sea of small business users — are seeing huge pressures, while still others, faced with no physical business, are just starting to approach e-commerce in earnest for the first time. Stripe’s idea is that the launches today can help it address all of these scenarios.
“What we’re seeing in the COVID-19 world is that the impact is not minor,” said Collison. “Online has always been steadily taking a share from offline, but now many [projected] years of that migration are happening in the space of a few weeks.”
Stripe is among those companies that have been very mum about when they might go public — a state of affairs that has only become more entrenched in recent times, given how the IPO market has all but dried up in the midst of a pandemic and economic slump. That has meant very little transparency about how Stripe is run, whether it's profitable and how much revenue it makes.
But Stripe did note last week that it had some $2 billion in cash and cash reserves, which at least speaks to a level of financial stability. And another hint of efficiency might be gleaned from today’s product news.
While these three new services don’t necessarily sound like they are connected to each other, what they have underpinning them is that they are all building on top of tech and services that Stripe has previously rolled out. This speaks to how, even as the company now handles some 250 million API requests daily, it’s keeping some lean practices in place in terms of how it invests and maximises engineering and business development resources.
The card issuing service, for example, is built on a card service that Stripe launched last year. Originally aimed at businesses providing their employees with credit cards — for example, to better manage work-related expenses, or to make transactions on behalf of the business — the card issuing platform can now be used by businesses to build out aspects of their own customer-facing services.
For example, Stripe noted that its first customer, Zipcar, will now place credit cards in each of its vehicles, which drivers can use to fuel up (the cards can only be used to buy gas). Another example Collison gave: a food delivery service could issue a card that a Postmates courier uses to pay for the meal a customer has already paid Postmates to pick up and deliver.
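Purpose-restricted cards like Zipcar’s are typically enforced at authorization time with spending controls, such as a list of allowed merchant category codes (MCCs). A hypothetical sketch of the rule shape (this is an illustration, not Stripe’s actual API; 5541 and 5542 are the standard MCCs for service stations and automated fuel dispensers):

```python
# Hypothetical issuer-side rule: the card only works at fuel merchants.
FUEL_MCCS = {"5541", "5542"}   # service stations, automated fuel dispensers

def authorize(allowed_mccs: set, merchant_mcc: str, amount_cents: int,
              limit_cents: int = 10_000) -> bool:
    """Approve only in-category charges under the per-transaction limit."""
    return merchant_mcc in allowed_mccs and amount_cents <= limit_cents

print(authorize(FUEL_MCCS, "5541", 4_500))   # True: a gas purchase
print(authorize(FUEL_MCCS, "5812", 4_500))   # False: a restaurant charge
```

Because the check runs on every authorization request, the issuer can decline out-of-policy spending in real time rather than chasing it after the fact.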
Collison noted that while other startups like Marqeta have built big businesses around innovative card issuing services, “this is the first time it’s being issued on a self-serving basis,” meaning companies that want to use these cards can now set this up more quickly as a “programmatic card” experience, akin to self-serve, programmatic ads online.
It seems also to be good news for investors. “Stripe Issuing is a big step forward,” said Alex Rampell, general partner at Andreessen Horowitz, in a statement. “Not just for the millions of businesses running on Stripe, but for credit cards as a fundamental technology. Businesses can now use an API to create and issue cards exactly when and where they need them, and they can do it in a few clicks, not a few months. As investors, we’re excited by all the potential new companies and business models that will emerge as a result.”
Meanwhile, the revenue "optimization" engine that Stripe is rolling out is built on the same machine learning algorithms originally built for Radar, its fraud prevention tool, which launched in 2016 and was extended to larger enterprises in 2018. This makes a lot of sense, since oftentimes the reason transactions get rejected is suspicion of fraud. Why it's taken four years to extend that capability to improving how transactions are approved or rejected is not entirely clear, but Stripe estimates that it could enable a further $2.5 billion in transactions annually.
One reason why the revenue optimization may have taken some time to roll out was because while Stripe offers a very seamless, simple API for users, it’s doing a lot of complex work behind the scenes knitting together a lot of very fragmented payment flows between card issuers, banks, businesses, customers and more in order to make transactions possible.
The third product announcement speaks to how Stripe is simplifying a bit more of that. Now, it's able to provide direct links into six big card networks: Visa, Mastercard, American Express, Discover, JCB and China Union Pay. That effectively covers the major card networks in North and Latin America, Southeast Asia and Europe. Previously, Stripe would have had to work with third parties to integrate acceptance of all of these networks in different regions, which would have cut into Stripe's own margins and also given it less flexibility in how it could handle the transaction data.
Launching the revenue optimization by being able to apply machine learning to the transaction data is one example of where and how it might be able to apply more innovative processes from now on.
While Stripe is mainly focused today on how to serve its wider customer base and to just help business continue to keep running, Collison noted that the COVID-19 pandemic has had a measurable impact on Stripe beyond just boosts in business for some of its customers.
The whole company has been working remotely for weeks, including its development team, making for challenging times in building and rolling out services.
And Stripe, along with others, is also in the early stages of piloting how it will play a role in issuing small business loans as part of the CARES Act, he said.
In addition to that, he noted that there has been an emergence of more medical and telehealth services using Stripe for payments.
Before now, many of those use cases had been blocked by the banks, he said, for reasons of the industries themselves being strictly regulated in terms of what kind of data could get passed across networks and the sensitive nature of the businesses themselves. He said that a lot of that has started to get unblocked in the current climate, and “the growth of telemedicine has been off the charts.”
Today marks the conclusion of a years-long saga that started when John Oliver did a segment on net neutrality so popular that it brought the FCC's comment system to its knees. Years later, the FCC is finally close to addressing all the issues raised in an investigation by the Government Accountability Office (GAO).
The report covers numerous cybersecurity and IT issues, some of which the FCC addressed quickly, some not so quickly, and some it’s still working on.
“Today’s GAO report makes clear what we knew all along: the FCC’s system for collecting public input has problems,” Commissioner Jessica Rosenworcel told TechCrunch. “The agency needs to fully fix this mess because this is the way the FCC is supposed to take input from the public. But as this report demonstrates, we have real work to do.”
Here’s the basic timeline of events, which seem so long ago now:
- May 2017: John Oliver’s segment airs, and the next day the FCC claims it was hit by denial-of-service attacks that took down its comment system, ECFS. (In fact it was merely the sheer volume of people who wanted to share their opinion of the FCC’s plan to kill net neutrality.)
- July 2017: The FCC refuses to release any details on the cyberattack despite Congressional demands, saying the threat was “ongoing.” (Its investigations had not in fact determined malicious intent, and its official account was in doubt internally from the start.)
- August 2017: Congress calls for an independent investigation of the FCC’s claims and its comment system. (That’s the report released today. Also around this time another improbable “hack” was found to have (not) happened in 2014.)
- October 2017: FCC’s chief information officer, David Bray, who claimed the attacks took place both in 2017 and 2014, leaves the FCC.
- December 2017: The FCC votes along party lines to kill net neutrality.
- June 2018: A watchdog group acquires 1,300 pages of emails, which (though very heavily redacted) show that the DDoS claims were essentially false and known to be so.
- August 2018: The FCC finally admits that it was never hacked, and the next day its own internal report comes out showing that it really was just overwhelming interest from people wanting to be heard. Members of Congress accuse Chairman Ajit Pai of “dereliction of duty” in perpetuating this dangerously incorrect narrative.
Then it’s pretty quiet basically until today, when the report requested in 2017 was publicly released. A version with sensitive information (like exact software configurations and other technical information) was internally circulated in September, then revised for today’s release.
The final report is not much of a bombshell, since much of it has been telegraphed ahead of time. It’s a collection of criticisms of an outdated system with inadequate security and other failings that might have been directed at practically any federal agency, among which cybersecurity practices are notoriously poor.
Digits, a fintech startup hailing from the same team that built and sold Crashlytics to Twitter, is officially launching today after two years of development. It’s also announcing a $22 million Series B round of funding led by GV, as it makes its public debut.
While the company had been fairly quiet about product details while in stealth mode, it’s today unveiling its first product: a visual, machine learning-powered expense monitoring dashboard aimed at startups and small businesses.
The dashboard, called Digits for Expenses, helps business owners track how their company is spending money, by showing things like spend by category, by identifying vendors and recurring expenses and by offering real-time alerts, among other features.
Instead of requiring business owners to make a switch from their existing financial solutions, Digits connects with the accounting software, banks, payroll providers, financial packages, sources of revenue and credit cards the business already uses — like Xero, QuickBooks, NetSuite, Citi, Bank of America or Chase, for example.
At launch, the list includes more than 9,000 banks, with support for Xero and NetSuite coming soon.
After setup, Digits will then automatically analyze the company’s spend and visualize it, in real time.
While visualizations of data may be reminiscent of personal finance startup Mint, Digits’ web-based solution is more technical in nature and offers an expanded analysis of the data on hand. Plus, as a business solution, it has to offer features like security, permissioning and collaborative workflows, which results in a more sophisticated product.
Digits also uses machine learning technology to predictively categorize transactions as they happen and the software can alert users to anomalies — like suspicious activity or unexpectedly large transactions — in real time. Business owners can use the dashboard to find out things like how quickly expenses are growing, what the cash flow looks like, where costs can be trimmed, what services are being paid for on a recurring basis and more, and can search for transactions.
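The anomaly alerts described above can be approximated with a simple outlier test against a vendor’s recent spend. A minimal sketch with made-up numbers (Digits’ actual models are not public; a production system would use far richer features than a z-score):

```python
import statistics

def is_anomalous(history, new_amount, z_threshold=3.0):
    """Flag a transaction that sits far outside a vendor's usual spend."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    return abs(new_amount - mean) / stdev > z_threshold

aws_bills = [510.0, 495.0, 505.0, 498.0, 502.0]   # made-up monthly charges
print(is_anomalous(aws_bills, 503.0))    # False: in line with history
print(is_anomalous(aws_bills, 2400.0))   # True: unexpectedly large charge
```

Running a check like this per vendor as transactions arrive is what makes the alerting "real time": each new charge is compared against that vendor's history the moment it posts.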
The software also supports commenting on transactions, looping in a colleague to ask for clarification about a charge, and uploading missing receipts. All traffic runs over HTTPS with TLS and certificates, so data is encrypted in transit between Digits' services, and it is also encrypted at rest.
The original idea for Digits came from a problem that co-founders Wayne Chang and Jeff Seibert faced themselves when building Crashlytics. As they explained previously, their focus as entrepreneurs was on solving technical challenges, not on the operational side of running a business.
Many entrepreneurs also find themselves in this same space. They’re trying to solve a problem or crack a tough engineering puzzle, but instead have to redirect their time and resources to spreadsheets, financial reports, transaction records and other paperwork required to actually run the business.
“Startups and small businesses today simply don’t have the resources to manage their finances internally. Most of them still settle for spreadsheets, and the lucky ones work on an hourly basis with external accountants,” explains Seibert. “As a result, their accounting itself is seen as a cost-center, and they pay for little beyond the basic monthly financial statements — Profit & Loss, Balance Sheet, etc. By the time those statements are delivered — weeks after the end of each month — they’re already out of date,” he said.
That means things businesses need — like updates, one-off reports and new budgets — can require additional costs and longer wait times, so they get skipped.
The COVID-19 pandemic has put even more pressure on small businesses, many of which are now struggling to even survive. As a result, Digits has decided to launch the product for free to those who sign up — not a free trial, but actually free. It plans to later charge for additional products and paid upgrades to support its own business.
Digits is able to make this offer because of its now-expanded venture funding.
Already, the company had raised $10.5 million in Series A funding in a round led by Benchmark. That round also included a sizable group of 72 angel investors, among them founders and CEOs from companies like Box, GitHub, Tinder, Twitch, StitchFix, SoFi and several others — entrepreneurs with an understanding of the problems Digits is aiming to solve.
Today, Digits is announcing an additional $22 million led by Jessica Verrilli at GV, who also now joins Digits’ board alongside Benchmark’s Peter Fenton. (Benchmark also participated in the new round).
“Jeff and Wayne are masterful at creating intuitive, high-utility products from complicated data,” said Verrilli about the GV investment. “I saw this up close with Crashlytics and Twitter, and I’m thrilled to partner with them on Digits as they reimagine financial software for startups,” she added.
The startup, now a team of 18 and hiring, had already been offering its software to a group of customers who effectively operated as beta testers, allowing Digits to refine its product ahead of today's public launch. Digits isn't able to share most of its customers' names, but it noted that Coda was one of its early adopters and provided valuable feedback.
It also has more than 10,000 companies that joined its waitlist over the past two years and are now being let in.
At the time of its Series A, Digits saw more than $1.5 billion in transaction value flowing across its production systems. That number has since grown to $8 billion.
The software is free starting today for U.S.-based small businesses. The company plans to add support for international markets later this year.
Facebook has agreed to block access to certain anti-government content to users in Vietnam, following months of having its services throttled there, reportedly by state-owned telecoms.
Reuters, citing sources within the company, reported that Vietnam requested earlier in the year that Facebook restrict a variety of content it deemed illegal, such as posts critical of the government. When the social network balked, the country used its control over local internet providers to slow Facebook traffic to unusable levels.