Widely held myths about sleep are damaging our health and our mood, as well as shortening our lives, say researchers.
A team at New York University trawled the internet to find the most common claims about a good night’s kip.
Then, in a study published in the journal Sleep Health, they matched the claims to the best scientific evidence.
They hope that dispelling sleep myths will improve people’s physical and mental health and well-being.
So, how many are you guilty of?
Myth 1 – You can cope on less than five hours’ sleep
This is the myth that just won’t go away.
Former British Prime Minister Margaret Thatcher famously got by on just four hours a night. German Chancellor Angela Merkel has made similar claims, and swapping hours in bed for extra time in the office is not uncommon in tales of business or entrepreneurial success.
Yet the researchers said the belief that less than five hours’ shut-eye was healthy was one of the most damaging myths to health.
“We have extensive evidence to show sleeping five hours or less consistently increases your risk greatly for adverse health consequences,” said researcher Dr Rebecca Robbins.
These included cardiovascular diseases, such as heart attacks and strokes, and shorter life expectancy.
Instead, she recommends everyone should aim for a consistent seven to eight hours of sleep a night.
Myth 2 – Alcohol before bed boosts your sleep
The relaxing nightcap is a myth, says the team, whether it’s a glass of wine, a dram of whisky or a bottle of beer.
“It may help you fall asleep, but it dramatically reduces the quality of your rest that night,” said Dr Robbins.
It particularly disrupts your REM (rapid eye movement) stage of sleep, which is important for memory and learning.
So yes, you will have slept and may have nodded off more easily, but some of the benefits of sleep are lost.
Alcohol is also a diuretic, so you may find yourself having to deal with a full bladder in the middle of the night too.
Myth 3 – Watching TV in bed helps you relax
Have you ever thought “I need to wind down before bed, I’m going to watch some TV”?
Well, the latest Brexit twists and turns on the BBC News at Ten might be bad for sleep.
Dr Robbins argues: “Often if we’re watching the television it’s the nightly news… it’s something that’s going to cause you insomnia or stress right before bed when we’re trying to power down and relax.”
And as for Game of Thrones, it’s hard to argue the Red Wedding was relaxing.
The other issue with TV – along with smartphones and tablets – is that they produce blue light, which can delay the body’s production of the sleep hormone melatonin.
Myth 4 – If you’re struggling to sleep, stay in bed
You’ve spent so long trying to nod off you’ve managed to count all the sheep in New Zealand (that’s about 28 million).
So what should you do next? The answer is not to keep trying.
“We start to associate our bed with insomnia,” said Dr Robbins.
“It does take the healthy sleeper about 15 minutes to fall asleep, but much longer than that… make sure to get out of bed, change the environment and do something that’s mindless.”
Her tip – go fold some socks.
Myth 5 – Hitting the snooze button
Who isn’t guilty of reaching for the snooze button on their phone, thinking that extra six minutes in bed is going to make all the difference?
But the research team says that when the alarm goes off, we should just get up.
Dr Robbins said: “Realise you will be a bit groggy – all of us are – but resist the temptation to snooze.
“Your body will go back to sleep, but it will be very light, low-quality sleep.”
Instead the advice is to throw open the curtains and expose yourself to as much bright light as possible.
Myth 6 – Snoring is always harmless
Snoring can be harmless, but it can also be a sign of the disorder sleep apnoea.
This causes the walls of the throat to relax and narrow during sleep, and can briefly stop people breathing.
People with the condition are more likely to develop high blood pressure, an irregular heartbeat and have a heart attack or a stroke.
Dr Robbins concludes: “Sleep is one of the most important things we can all do tonight to improve our health, our mood, our well-being and our longevity.”
Instagram still doesn’t offer an official way to post photos to the social network from your computer. That’s alright, though, because it’s still possible, and once you master the process, you’ll be glad you did. Editing photos on your computer and then having to sync them to your phone is an unnecessary, time-consuming extra step.
Whether you took a photo on a fancy camera or happened to find an old photo on a hard drive, the workaround makes it possible to quickly post to Instagram.
Upload from a desktop browser
Most browsers have a way of letting you change the “user agent” — the thing that tells a website what kind of device you’re on. So even when you’re on a laptop or desktop, you can trick a website like Instagram into showing you the mobile site. That’s what we’re going to do.
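The same idea can be sketched programmatically: a server decides which version of a page to serve based on the User-Agent header the client sends. As a minimal illustration using only Python’s standard library — the user-agent string below is an example iPhone Safari string, not anything Instagram requires — here is how a request can identify itself as a mobile browser:

```python
import urllib.request

# Example iPhone Safari user-agent string (illustrative; any mobile UA works).
MOBILE_UA = (
    "Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) "
    "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1"
)

def mobile_request(url: str) -> urllib.request.Request:
    """Build a request that announces itself as a mobile browser."""
    return urllib.request.Request(url, headers={"User-Agent": MOBILE_UA})

req = mobile_request("https://www.instagram.com/")
# urllib stores header names capitalized, e.g. "User-agent".
print(req.get_header("User-agent"))
```

Passing `req` to `urllib.request.urlopen` would fetch whatever the server returns for that user agent — the same mechanism the Safari and Chrome tricks below rely on to make the mobile upload control appear.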
Safari
On Safari, it’s easy. Go to Safari > Preferences > Advanced. Check the box at the very bottom that says, Show Develop menu in menu bar. Now open a new Safari window or tab and click on Develop > User Agent > Safari — iOS 12.1.3 — iPhone in the menu bar.
Next, go to Instagram.com and sign into your account. At the bottom of the screen will be a + icon — tap it and select a photo from your computer to upload. After the photo is uploaded, you can still apply filters and edit the photo as you would in the Instagram app.
When you’re done, make sure you change your user agent back to the Default setting to avoid viewing all websites in their mobile state.
Chrome
Uploading photos to Instagram from Chrome only takes a few clicks of the mouse.
In Chrome, go to Instagram.com and sign in. Now right-click anywhere on the page, then select Inspect from the list of options. Part of the site will be covered up with the Inspector tool, but we only really care about the small icon of a tablet and phone. It’s in the top-left corner of the Inspector window — click it. The page will refresh with a mobile view, and the + icon to create a post should show up at the bottom of the window. If it doesn’t, refresh the page.
When you’re done, click on the tablet/phone icon again. Close the Inspector tool and refresh the website to go back to the desktop version.
Originally published May 8, 2017. Update, April 9, 2019: Updated with new screenshots and new information.
I recently met with a group of managers to discuss ways to improve meetings. Our goal was to figure out how to create a space that people actually look forward to being in. We each began by describing a meeting we remembered as especially powerful.
One story stood out.
My colleague told us about a time when he was a young engineer working on several project teams in a manufacturing facility. He said, “Josh, my manager, would take everyone out for pizza when he came to the factory, and we’d have a ‘no secrets’ meeting.
Josh asked us about whatever he wanted to know and we did the same in return. It was a meeting where everyone had permission to say or ask anything. It was amazing.”
Josh used these meetings to discover how his team was doing, how their projects were progressing, and what they needed in terms of support and resources. He asked broad questions to initiate open conversation:
What do you think I need to know?
Where are you struggling?
What are you proud of?
There was no pressure to have a perfect answer. The only requirement was to be honest and sincere. Of course, it helped that Josh was a thoughtful, authentic, and caring manager — qualities needed to create the psychological safety such a conversation requires.
The quest for better meetings ultimately lies in leading with mutual respect and inclusivity, and in establishing a space that is safe enough for people to speak their minds. You may not need to do exactly what Josh did, but you can increase the freedom, candor, and quality of conversation in your own meetings by focusing on two key areas: giving permission and creating safety.
Here’s how.
Let’s start with permission. Permission to say or ask anything is priceless. It allows us to fully express ourselves: to seek what we want, to give feedback, to speak up about issues when we find the need. By announcing that he would like to have a “no secrets” meeting, Josh was giving his team permission to display a level of candor that isn’t reached in most settings. He asked those who spoke not to hold back or edit their thoughts. He asked those who listened to give their peers a chance to be fully heard, which is what we all want — to say exactly what we are thinking and be respected for saying it.
In your own meetings, talk about permission up front — it’s best to address it directly rather than assume it’s already there. What permission would you like from the group so that you can lead effectively? What permission does the group need from you to successfully participate?
As a leader, ask your team permission to:
keep the conversation on track when it diverges or gets repetitive
call on people who have not yet spoken
hold people back if they are dominating the conversation
ask clarifying questions when you need someone to elaborate
Empower your team by reminding them that they have permission to:
ask questions at any time
invite colleagues into the conversation if they have not spoken
ask to spend extra time on a topic
ask other people to say more about where they stand on an issue
express concerns that haven’t been fully addressed
Finally, encourage your team (and yourself) to ask permission before making a comment. It will help ensure that your comments are non-threatening and received thoughtfully. Before speaking out, say:
May I ask you something?
May I tell you something?
May I give you some coaching?
May I push back a bit on what you are saying?
If that feels like too much to remember, the main takeaway is: You and your team have a right to ask for whatever you need to be effective in a meeting — to lead for results, to fully express yourselves, and to add value to the discussion.
Now, let’s focus on safety. The degree to which a person feels safe in a meeting setting is largely based on their previous experiences. Many of us have — at one point or another — experienced feeling as if we were not heard or appreciated when we spoke up. But when people feel their comments will be listened to and treated with respect, they are more likely to be vulnerable and say exactly what they are thinking. Conversations become broader and deeper when everyone is involved and feels safe enough to speak their minds. To create psychological safety during a meeting:
ask the group to devote their full attention to each person who speaks (do this at the start of the meeting)
allow each person to take their time and complete their thoughts
ask follow-up questions for clarity if necessary
share what is valuable about someone’s question or comment
use people’s names and refer back to earlier comments they’ve made
invite people into the conversation who have not spoken
answer any and all questions truthfully
summarize what you learned as the meeting comes to an end
explain what actions you will take to put those insights to use and ask your team for their suggestions as well
acknowledge the quality of the conversation and thank the group for it
After the meeting, follow up by:
completing the action items by the deadlines you set
not sharing the conversation with others without permission
sending written thank you notes to participants (when appropriate)
following up with people to ensure their comments were addressed to their satisfaction
People don’t just want to belong, they want to contribute. You can give your team the opportunity to do so by applying the above principles. In the process of having more candid, mutually respectful conversations, your team will become more cohesive and able to work together more powerfully. They may even begin to look forward to your meetings because of the remarkable conversations that permission and safety create. And better still, you may even start to look forward to leading those meetings.
Paul Axtell is an author, speaker, and corporate trainer. He is the author of two award-winning books: Meetings Matter and the recently released second edition of Ten Powerful Things to Say to Your Kids. He has developed a training series, Being Remarkable, which is designed to be led by managers or HR
In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group.
The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps.
Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.
The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.
The technology and its use to keep tabs on China’s 11 million Uighurs were described by five people with direct knowledge of the systems, who requested anonymity because they feared retribution. The New York Times also reviewed databases used by the police, government procurement documents and advertising materials distributed by the A.I. companies that make the systems.
Chinese authorities already maintain a vast surveillance net, including tracking people’s DNA, in the western region of Xinjiang, which many Uighurs call home. But the scope of the new systems, previously unreported, extends that monitoring into many other corners of the country.
Shoppers lined up for identification checks outside the Kashgar Bazaar last fall. Members of the largely Muslim Uighur minority have been under Chinese surveillance and persecution for years. Credit: Paul Mozur
The police are now using facial recognition technology to target Uighurs in wealthy eastern cities like Hangzhou and Wenzhou and across the coastal province of Fujian, said two of the people. Law enforcement in the central Chinese city of Sanmenxia, along the Yellow River, ran a system that over the course of a month this year screened whether residents were Uighurs 500,000 times.
Police documents show demand for such capabilities is spreading. Almost two dozen police departments in 16 different provinces and regions across China sought such technology beginning in 2018, according to procurement documents. Law enforcement from the central province of Shaanxi, for example, aimed to acquire a smart camera system last year that “should support facial recognition to identify Uighur/non-Uighur attributes.”
Some police departments and technology companies described the practice as “minority identification,” though three of the people said that phrase was a euphemism for a tool that sought to identify Uighurs exclusively. Uighurs often look distinct from China’s majority Han population, more closely resembling people from Central Asia. Such differences make it easier for software to single them out.
For decades, democracies have had a near monopoly on cutting-edge technology. Today, a new generation of start-ups catering to Beijing’s authoritarian needs is beginning to set the tone for emerging technologies like artificial intelligence. Similar tools could automate biases based on skin color and ethnicity elsewhere.
“Take the most risky application of this technology, and chances are good someone is going to try it,” said Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law. “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”
From a technology standpoint, using algorithms to label people based on race or ethnicity has become relatively easy. Companies like I.B.M. advertise software that can sort people into broad groups.
But China has broken new ground by identifying one ethnic group for law enforcement purposes. One Chinese start-up, CloudWalk, outlined a sample experience in marketing its own surveillance systems. The technology, it said, could recognize “sensitive groups of people.”
“If originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear,” it said on its website, “it immediately sends alarms” to law enforcement.
In practice, the systems are imperfect, two of the people said. Often, their accuracy depends on environmental factors like lighting and the positioning of cameras.
In the United States and Europe, the debate in the artificial intelligence community has focused on the unconscious biases of those designing the technology. Recent tests showed facial recognition systems made by companies like I.B.M. and Amazon were less accurate at identifying the features of darker-skinned people.
China’s efforts raise starker issues. While facial recognition technology uses aspects like skin tone and face shapes to sort images in photos or videos, it must be told by humans to categorize people based on social definitions of race or ethnicity. Chinese police, with the help of the start-ups, have done that.
“It’s something that seems shocking coming from the U.S., where there is most likely racism built into our algorithmic decision making, but not in an overt way like this,” said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. “There’s not a system designed to identify someone as African-American, for example.”
The Chinese A.I. companies behind the software include Yitu, Megvii, SenseTime and CloudWalk, which are each valued at more than $1 billion. Another company, Hikvision, which sells cameras and software to process the images, offered a minority recognition function but began phasing it out in 2018, according to one of the people.
The companies’ valuations soared in 2018 as China’s Ministry of Public Security, its top police agency, set aside billions of dollars under two government plans, called Skynet and Sharp Eyes, to computerize surveillance, policing and intelligence collection.
In a statement, a SenseTime spokeswoman said she checked with “relevant teams,” who were not aware its technology was being used to profile. Megvii said in a statement it was focused on “commercial not political solutions,” adding, “we are concerned about the well-being and safety of individual citizens, not about monitoring groups.” CloudWalk and Yitu did not respond to requests for comment.
China’s Ministry of Public Security did not respond to a faxed request for comment.
Selling products with names like Fire Eye, Sky Eye and Dragonfly Eye, the start-ups promise to use A.I. to analyze footage from China’s surveillance cameras. The technology is not mature — in 2017 Yitu promoted a one-in-three success rate when the police responded to its alarms at a train station — and many of China’s cameras are not powerful enough for facial recognition software to work effectively.
Yet they help advance China’s architecture for social control. To make the algorithms work, the police have put together face-image databases for people with criminal records, mental illnesses, records of drug use, and those who petitioned the government over grievances, according to two of the people and procurement documents. A national database of criminals at large includes about 300,000 faces, while a list of people with a history of drug use in the city of Wenzhou totals 8,000 faces, they said.
A security camera in a rebuilt section of the Old City in Kashgar, Xinjiang. Credit: Thomas Peter/Reuters
Using a process called machine learning, engineers feed data to artificial intelligence systems to train them to recognize patterns or traits. In the case of the profiling, they would provide thousands of labeled images of both Uighurs and non-Uighurs. That would help generate a function to distinguish the ethnic group.
The A.I. companies have taken money from major investors. Fidelity International and Qualcomm Ventures were a part of a consortium that invested $620 million in SenseTime. Sequoia invested in Yitu. Megvii is backed by Sinovation Ventures, the fund of the well-known Chinese tech investor Kai-Fu Lee.
A Sinovation spokeswoman said the fund had recently sold a part of its stake in Megvii and relinquished its seat on the board. Fidelity declined to comment. Sequoia and Qualcomm did not respond to emailed requests for comment.
Mr. Lee, a booster of Chinese A.I., has argued that China has an advantage in developing A.I. because its leaders are less fussed by “legal intricacies” or “moral consensus.”
“We are not passive spectators in the story of A.I. — we are the authors of it,” Mr. Lee wrote last year. “That means the values underpinning our visions of an A.I. future could well become self-fulfilling prophecies.” He declined to comment on his fund’s investment in Megvii or its practices.
Ethnic profiling within China’s tech industry isn’t a secret, the people said. It has become so common that one of the people likened it to the short-range wireless technology Bluetooth. Employees at Megvii were warned about the sensitivity of discussing ethnic targeting publicly, another person said.
China has devoted major resources toward tracking Uighurs, citing ethnic violence in Xinjiang and Uighur terrorist attacks elsewhere. Beijing has thrown hundreds of thousands of Uighurs and others in Xinjiang into re-education camps.
Government procurement documents from the past two years also show demand has spread. In the city of Yongzhou in southern Hunan Province, law enforcement officials sought software to “characterize and search whether or not someone is a Uighur,” according to one document.
In two counties in Guizhou Province, the police listed a need for Uighur classification. One asked for the ability to recognize Uighurs based on identification photos at better than 97 percent accuracy. In the central megacity of Chongqing and the region of Tibet, the police put out tenders for similar software. And a procurement document for Hebei Province described how the police should be notified when multiple Uighurs booked the same flight on the same day.
A study in 2018 by the authorities described a use for other types of databases. Co-written by a Shanghai police official, the paper said facial recognition systems installed near schools could screen for people included in databases of the mentally ill or crime suspects.
One database generated by Yitu software and reviewed by The Times showed how the police in the city of Sanmenxia used software running on cameras to attempt to identify residents more than 500,000 times over about a month beginning in mid-February.
Included in the code alongside tags like “rec_gender” and “rec_sunglasses” was “rec_uygur,” which returned a 1 if the software believed it had found a Uighur. Within the half million identifications the cameras attempted to record, the software guessed it saw Uighurs 2,834 times. Images stored alongside the entry would allow the police to double check.
Yitu and its rivals have ambitions to expand overseas. Such a push could easily put ethnic profiling software in the hands of other governments, said Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology.
“I don’t think it’s overblown to treat this as an existential threat to democracy,” Mr. Frankle said. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”
An undercover police officer in Kashgar.
Paul Mozur is a Shanghai-based technology reporter. He writes about Asia’s biggest tech companies, as well as cybersecurity, emerging internet cultures, censorship and the intersection of geopolitics and technology in Asia. He previously worked for The Wall Street Journal. @paulmozur
The tech giant [Google] records people’s locations worldwide.
Now, investigators are using it to find suspects and witnesses near crimes, running the risk of snaring the innocent
When detectives in a Phoenix suburb arrested a
warehouse worker in a murder investigation last December, they credited a
new technique with breaking open the case after other leads went cold.
The police told the suspect, Jorge Molina, they
had data tracking his phone to the site where a man was shot nine
months earlier. They had made the discovery after obtaining a search
warrant that required Google to provide information on all devices it
recorded near the killing, potentially capturing the whereabouts of
anyone in the area.
Investigators also had other circumstantial
evidence, including security video of someone firing a gun from a white
Honda Civic, the same model that Mr. Molina owned, though they could not
see the license plate or attacker.
But after he spent nearly a week in jail, the
case against Mr. Molina fell apart as investigators learned new
information and released him. Last month, the police arrested another
man: his mother’s ex-boyfriend, who had sometimes used Mr. Molina’s car.
The warrants, which draw on an enormous Google
database employees call Sensorvault, turn the business of tracking
cellphone users’ locations into a digital dragnet for law enforcement.
In an era of ubiquitous data gathering by tech companies, it is just the
latest example of how personal information — where you go, who your
friends are, what you read, eat and watch, and when you do it — is being
used for purposes many people never expected. As privacy concerns have
mounted among consumers, policymakers and regulators, tech companies have come under intensifying scrutiny over their data collection practices.
The Arizona case demonstrates the promise and
perils of the new investigative technique, whose use has risen sharply
in the past six months, according to Google employees familiar with the
requests. It can help solve crimes. But it can also snare innocent
people.
Technology companies have for years responded
to court orders for specific users’ information. The new warrants go
further, suggesting possible suspects and witnesses in the absence of
other clues. Often, Google employees said, the company responds to a
single warrant with location information on dozens or hundreds of
devices.
Law enforcement officials described the method as exciting, but cautioned that it was just one tool.
“It doesn’t pop out the answer like a ticker
tape, saying this guy’s guilty,” said Gary Ernsdorff, a senior
prosecutor in Washington State who has worked on several cases involving
these warrants. Potential suspects must still be fully investigated, he
added. “We’re not going to charge anybody just because Google said they
were there.”
It is unclear how often these search requests
have led to arrests or convictions, because many of the investigations
are still open and judges frequently seal the warrants. The practice was
first used by federal agents in 2016, according to Google employees,
and first publicly reported last year in North Carolina. It has since spread to local departments across the country, including in California, Florida, Minnesota
and Washington. This year, one Google employee said, the company
received as many as 180 requests in one week. Google declined to confirm
precise numbers.
The technique illustrates a phenomenon privacy
advocates have long referred to as the “if you build it, they will come”
principle — anytime a technology company creates a system that could be
used in surveillance, law enforcement inevitably comes knocking.
Sensorvault, according to Google employees, includes detailed location
records involving at least hundreds of millions of devices worldwide and
dating back nearly a decade.
The new orders, sometimes called “geofence” warrants, specify an area and a time period, and Google gathers information from Sensorvault about the devices that were there. It labels them with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices they think belong to suspects or witnesses, Google reveals the users’ names and other information.
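The winnowing step described here — select every location ping inside a box during a window, keyed only by anonymous IDs — can be illustrated with a toy sketch. This is not Google’s actual implementation; the record layout, names, and sample coordinates below are invented for illustration only:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record layout; Sensorvault's real internals are not public.
@dataclass
class Ping:
    anon_id: str   # anonymous device ID, as the article describes
    lat: float
    lon: float
    ts: datetime

def geofence(pings, lat_rng, lon_rng, t_rng):
    """Return anonymous IDs of devices seen inside the box during the window."""
    (lat_lo, lat_hi), (lon_lo, lon_hi), (t_lo, t_hi) = lat_rng, lon_rng, t_rng
    return sorted({
        p.anon_id
        for p in pings
        if lat_lo <= p.lat <= lat_hi
        and lon_lo <= p.lon <= lon_hi
        and t_lo <= p.ts <= t_hi
    })

pings = [
    Ping("device-a", 33.45, -112.07, datetime(2018, 3, 1, 21, 5)),
    Ping("device-b", 33.46, -112.06, datetime(2018, 3, 1, 23, 50)),  # outside the time window
    Ping("device-c", 40.71, -74.01, datetime(2018, 3, 1, 21, 10)),   # outside the area
]
hits = geofence(
    pings,
    lat_rng=(33.40, 33.50),
    lon_rng=(-112.10, -112.00),
    t_rng=(datetime(2018, 3, 1, 21, 0), datetime(2018, 3, 1, 22, 0)),
)
print(hits)  # ['device-a']
```

Only after this anonymous stage, per the process described above, would investigators go back to request identifying information for the narrowed set of devices.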
‘‘There are privacy concerns that we all have
with our phones being tracked — and when those kinds of issues are
relevant in a criminal case, that should give everybody serious pause,”
said Catherine Turner, a Minnesota defense lawyer who is handling a case
involving the technique.
Investigators who spoke with The New York Times
said they had not sent geofence warrants to companies other than
Google, and Apple said it did not have the ability to perform those
searches. Google would not provide details on Sensorvault, but Aaron
Edens, an intelligence analyst with the sheriff’s office in San Mateo
County, Calif., who has examined data from hundreds of phones, said most
Android devices and some iPhones he had seen had this data available
from Google.
In a statement, Richard Salgado, Google’s
director of law enforcement and information security, said that the
company tried to “vigorously protect the privacy of our users while
supporting the important work of law enforcement.” He added that it
handed over identifying information only “where legally required.”
Mr. Molina, 24, said he was shocked when the
police told him they suspected him of murder, and he was surprised at
their ability to arrest him based largely on data.
“I just kept thinking, You’re innocent, so
you’re going to get out,” he said, but he added that he worried that it
could take months or years to be exonerated. “I was scared,” he said.
A Novel Approach
Detectives have used the warrants for help with robberies, sexual assaults, arsons and murders. Last year, federal agents requested the data to investigate a string of bombings around Austin, Tex.
The approach has yielded useful information even if
it wasn’t what broke the case open, investigators said. In a home
invasion in Minnesota, for example, Google data showed a phone taking
the path of the likely intruder, according to a news report and police documents. But detectives also cited other leads, including a confidential informant, in developing suspects. Four people were charged in federal court.
According to several current and former Google
employees, the Sensorvault database was not designed for the needs of
law enforcement, raising questions about its accuracy in some
situations.
Though Google’s data cache is enormous, it
doesn’t sweep up every phone, said Mr. Edens, the California
intelligence analyst. And even if a location is recorded every few
minutes, that may not coincide with a shooting or an assault.
Google often doesn’t provide information right
away, investigators said. The Google unit handling the requests has
struggled to keep up, so it can take weeks or months for a response. In
the Arizona investigation, police received data six months after sending
the warrant. In a different Minnesota case this fall, it came in four
weeks.
But despite the drawbacks, detectives noted how precise the data was and how it was collected even when people weren’t making calls or using apps — both improvements over tracking that relies on cell towers.
“It shows the whole pattern of life,” said Mark Bruley, the deputy police chief in Brooklyn Park, Minn., where investigators have been using the technique since this fall. “That’s the game changer for law enforcement.”
A Trove of Data
Location data is a lucrative business — and Google is by far the biggest player, propelled largely by its Android phones. It uses the data to power advertising tailored to a person’s location, part of a more than $20 billion market for location-based ads last year.
In 2009, the company introduced Location History, a feature for users who wanted to see where they had been. Sensorvault stores information on anyone who has opted in, allowing regular collection of data from GPS signals, cellphone towers, nearby Wi-Fi devices and Bluetooth beacons.
People who turn on the feature can see a timeline of their activity and get recommendations based on it. Google apps prompt users to enable Location History for things like traffic alerts. Information in the database is held indefinitely, unless the user deletes it.
“We citizens are giving this stuff away,” said Mr. Ernsdorff, the Washington State prosecutor, adding that if companies were collecting data, law enforcement should be able to obtain a court order to use it.
Current and former Google employees said they were surprised by the warrants. Brian McClendon, who led the development of Google Maps and related products until 2015, said he and other engineers had assumed the police would seek data only on specific people. The new technique, he said, “seems like a fishing expedition.”
Uncharted Legal Territory
The practice raises novel legal issues, according to Orin Kerr, a law professor at the University of Southern California and an expert on criminal law in the digital age.
One concern: the privacy of innocent people scooped up in these searches. Several law enforcement officials said the information remained sealed in their jurisdictions but not in every state.
In Minnesota, for example, the name of an innocent man was released to a local journalist after it became part of the police record. Investigators had his information because he was within 170 feet of a burglary. Reached by a reporter, the man said he was surprised about the release of his data and thought he might have appeared because he was a cabdriver. “I drive everywhere,” he said.
These searches also raise constitutional questions. The Fourth Amendment says a warrant must request a limited search and establish probable cause that evidence related to a crime will be found.
Warrants reviewed by The Times frequently established probable cause by explaining that most Americans owned cellphones and that Google held location data on many of these phones. The areas they targeted ranged from single buildings to multiple blocks, and most sought data over a few hours. In the Austin case, warrants covered several dozen houses around each bombing location, for times ranging from 12 hours to a week. It wasn’t clear whether Google responded to all the requests, and multiple officials said they had seen the company push back on broad searches.
Last year, the Supreme Court ruled that a warrant was required for historical data about a person’s cellphone location over weeks, but the court has not ruled on anything like geofence searches, or on related techniques such as one that pulls information on all phones registered to a cell tower.
Google’s legal staff decided even before the 2018 ruling that the company would require warrants for location inquiries, and it crafted the procedure that first reveals only anonymous data.
“Normally we think of the judiciary as being the overseer, but as the technology has gotten more complex, courts have had a harder and harder time playing that role,” said Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union. “We’re depending on companies to be the intermediary between people and the government.”
In several cases reviewed by The Times, a judge approved the entire procedure in a single warrant, relying on investigators’ assurances that they would seek data for only the most relevant devices. Google responds to those orders, but Mr. Kerr said it was unclear whether multistep warrants should pass legal muster.
Some jurisdictions require investigators to return to a judge and obtain a second warrant before getting identifying information. With another warrant, investigators can obtain more extensive data, including months of location patterns and even emails.
Mixed Results
Investigators in Arizona have never publicly disclosed a likely motive in the killing of Joseph Knight, the crime for which Mr. Molina was arrested. In a court document, they described Mr. Knight, a 29-year-old aircraft repair company employee, as having no known history of drug use or gang activity.
Detectives sent the geofence warrant to Google soon after the murder and received data from four devices months later. One device, a phone Google said was linked to Mr. Molina’s account, appeared to follow the path of the gunman’s car as seen on video. His carrier also said the phone was associated with a tower in roughly the same area, and his Google history showed a search about local shootings the day after the attack.
After his arrest, Mr. Molina told officers that Marcos Gaeta, his mother’s ex-boyfriend, had sometimes taken his car. The Times found a traffic ticket showing that Mr. Gaeta, 38, had driven that car without a license. Mr. Gaeta also had a lengthy criminal record.
While Mr. Molina was in jail, a friend told his public defender, Jack Litwak, that she was with him at his home about the time of the shooting, and she and others provided texts and Uber receipts to bolster his case. His home, where he lives with his mother and three siblings, is about two miles from the murder scene.
Mr. Litwak said his investigation found that Mr. Molina had sometimes signed in to other people’s phones to check his Google account. That could lead someone to appear in two places at once, though it was not clear whether that happened in this case.
Mr. Gaeta was arrested in California on an Arizona warrant. He was then charged in a separate California homicide from 2016. Officials said that case would probably delay his extradition to Arizona.
A police spokesman said “new information came to light” after Mr. Molina’s arrest, but the department would not comment further.
Months after his release, Mr. Molina was having trouble getting back on his feet. After being arrested at work, a Macy’s warehouse, he lost his job. His car was impounded for investigation and then repossessed.
The investigators “had good intentions” in using the technique, Mr. Litwak said. But, he added, “they’re hyping it up to be this new DNA type of forensic evidence, and it’s just not.”