Are you on a Mac (MacBook, MacBook Pro, or MacBook Air) and trying to screen share with OBS Studio? You've come to the right blog, where I'll cover how to set up your security settings and how to use Display Capture, with a brief walkthrough.
If you’re a visual learner, here’s a video explaining everything from end to end!
Here's a quick blog on how to use OBS to screen share, and on the security setting you need for display capture. I'm writing this walkthrough because I noticed a lot of people asking the same questions online, with no direct answer matching the phrases I was googling.
If you're new, like myself, welcome to OBS. This is my first OBS tutorial, and I'm attaching a video that walks through how to set up display capture, which is OBS's term for screen sharing.
On a normal Zoom call you usually have to ask for permission to screen share; with OBS, you can turn your computer into a one-person video production studio.
Chances are you installed OBS Studio recently and found this blog post.
The install I'm using is OBS Studio 26.10.
Be sure to stay current with the latest patches and updates; they often include fixes for the exact problems you're facing today.
Once it's installed, display capture takes you down a rabbit hole if you're on macOS. Recent macOS releases have more security settings and give apps less ability to be destructive. That's a positive for you and me, although it makes this kind of setup harder and may lower adoption.
My goal with this blog is to increase adoption in this area and help people navigate the setup.
Open your security & privacy settings to find your screen recording settings. We want to give OBS access because it’s a new application you just installed.
Find OBS in Privacy under Security & Privacy settings and change the screen recording settings for OBS to “checked.”
Once you've checked the box for OBS under Screen Recording, macOS will prompt you to quit OBS, because the app needs to restart for the change to take effect. Again, if you get lost, try typing what you need into the search field in the top right.
Start OBS again to pick up the changes you've just made to your system. Your Display Capture source will now show what you're displaying, depending on how many monitors you're using.
Instagram still doesn't offer an official way to post photos to the social network from your computer. That's alright, though, because it's still possible, and once you master the process you'll be glad you did. Editing photos on your computer and then having to sync them to your phone is an unnecessary, time-consuming step.
Whether you took a photo on a fancy camera or happened to find an old photo on a hard drive, the workaround makes it possible to quickly post to Instagram.
Upload from a desktop browser
Desktop browsers have a way of letting you change the "user agent" — the string that tells a website what kind of device you're on. So even when you're on a laptop or desktop, you can trick a website like Instagram into showing you the mobile site. That's what we're going to do.
On Safari, it’s easy. Go to Safari > Preferences > Advanced. Check the box at the very bottom that says, Show Develop menu in menu bar. Now open a new Safari window or tab and click on Develop > User Agent > Safari — iOS 12.1.3 — iPhone in the menu bar.
Next, go to Instagram.com and sign into your account. At the bottom of the screen will be a + icon — click it and select a photo from your computer to upload. After the photo is uploaded, you can still apply filters and edit the photo as you would in the Instagram app.
When you’re done, make sure you change your user agent back to the Default setting to avoid viewing all websites in their mobile state.
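If you're curious what's happening under the hood, the trick is nothing more than sending a different User-Agent header with each request. Here's a rough sketch in Python; the iPhone string below is illustrative (real user-agent strings vary by Safari and iOS version), and no request is actually sent:

```python
# Demonstrates the idea behind the browser trick: the User-Agent header
# is the only thing telling a site like Instagram what device you're on.
from urllib.request import Request

# An illustrative iPhone user-agent string (an assumption; real strings vary).
MOBILE_UA = (
    "Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) "
    "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 "
    "Mobile/15E148 Safari/604.1"
)

def mobile_request(url: str) -> Request:
    """Build a request that self-identifies as an iPhone."""
    return Request(url, headers={"User-Agent": MOBILE_UA})

req = mobile_request("https://www.instagram.com/")
print(req.get_header("User-agent"))  # the iPhone string above
```

Swap the header and, as far as the server can tell, you're on a phone — which is exactly what the Develop menu does for you in Safari.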
Uploading photos to Instagram from Chrome only takes a few clicks of the mouse.
(Screenshot by Jason Cipriani/CNET)
In Chrome, go to Instagram.com and sign in. Now right-click anywhere on the page, then select Inspect from the list of options. Part of the site will be covered by the Inspector tool, but we only really care about the small icon of a tablet and phone in the top-left corner of the Inspector window — click it. The page will refresh with a mobile view, and the + icon to create a post should show up at the bottom of the window. If it doesn't, refresh the page.
When you’re done, click on the tablet/phone icon again. Close the Inspector tool and refresh the website to go back to the desktop version.
Originally published May 8, 2017. Update, April 9, 2019: Updated with new screenshots and new information.
I recently met with a group of managers to discuss ways to improve
meetings. Our goal was to figure out how to create a space that people
actually look forward to being in. We each began by describing a meeting
we remembered as especially powerful.
One story stood out.
My colleague told us about a time when he was a young engineer working on several project teams in a manufacturing facility. He said, “Josh, my manager, would take everyone out for pizza when he came to the factory, and we’d have a ‘no secrets’ meeting.
Josh asked us about whatever he wanted to know and we did the same in return. It was a meeting where everyone had permission to say or ask anything. It was amazing.”
Josh used these meetings to discover how his team was doing, how their projects were progressing, and what they needed in terms of support and resources. He asked broad questions to initiate open conversation:
What do you think I need to know?
Where are you struggling?
What are you proud of?
There was no pressure to have a perfect answer. The only requirement
was to be honest and sincere. Of course, it helped that Josh was a
thoughtful, authentic, and caring manager — qualities needed to create
the psychological safety such a conversation requires.
The quest for better meetings ultimately lies in leading with mutual
respect and inclusivity, and establishing a space that is safe enough
for people to speak their minds. You may not need to do exactly what
Josh did, but you can increase the freedom, candor, and quality of
conversation in your own meetings by focusing on two key areas: giving
permission and creating safety.
Let’s start with permission. Permission to say or ask anything is priceless. It allows us to fully express ourselves: to seek what we want, to give feedback, to speak up about issues when we find the need. By announcing that he would like to have a “no secrets” meeting, Josh was giving his team permission to display a level of candor that isn’t reached in most settings. He asked those who spoke not to hold back or edit their thoughts. He asked those who listened to give their peers a chance to be fully heard, which is what we all want — to say exactly what we are thinking and be respected for saying it.
In your own meetings, talk about permission up front — it’s best to address it directly rather than assume it’s already there. What permission would you like from the group so that you can lead effectively? What permission does the group need from you to successfully participate?
As a leader, ask your team permission to:
keep the conversation on track when it diverges or gets repetitive
call on people who have not yet spoken
hold people back if they are dominating the conversation
ask clarifying questions when you need someone to elaborate
Empower your team by reminding them that they have permission to:
ask questions at any time
invite colleagues into the conversation if they have not spoken
ask to spend extra time on a topic
ask other people to say more about where they stand on an issue
express concerns that haven’t been fully addressed
Finally, encourage your team (and yourself) to ask permission before
making a comment. It will help ensure that your comments are
non-threatening and received thoughtfully. Before speaking out, say:
May I ask you something?
May I tell you something?
May I give you some coaching?
May I push back a bit on what you are saying?
If that feels like too much to remember, the main takeaway is: You
and your team have a right to ask for whatever you need to be effective
in a meeting — to lead for results, to fully express yourselves, and to
add value to the discussion.
Now, let’s focus on safety. The degree to which a
person feels safe in a meeting setting is largely based on their
previous experiences. Many of us have — at one point or another —
experienced feeling as if we were not heard or appreciated when we spoke
up. But when people feel their comments will be listened to and treated
with respect, they are more likely to be vulnerable and say exactly
what they are thinking. Conversations become broader and deeper when
everyone is involved and feels safe enough to speak their minds. To
create psychological safety during a meeting:
ask the group to devote their full attention to each person who speaks (do this at the start of the meeting)
allow each person to take their time and complete their thoughts
ask follow-up questions for clarity if necessary
share what is valuable about someone’s question or comment
use people’s names and refer back to earlier comments they’ve made
invite people into the conversation who have not spoken
answer any and all questions truthfully
summarize what you learned as the meeting comes to an end
explain what actions you will take to put those insights to use and ask your team for their suggestions as well
acknowledge the quality of the conversation and thank the group for it
After the meeting, follow up by:
completing the action items by the deadlines you set
not sharing the conversation with others without permission
sending written thank you notes to participants (when appropriate)
following up with people to ensure their comments were addressed to their satisfaction
People don’t just want to belong, they want to contribute. You can give your team the opportunity to do so by applying the above principles. In the process of having more candid, mutually respectful conversations, your team will become more cohesive and able to work together more powerfully. They may even begin to look forward to your meetings because of the remarkable conversations that permission and safety create. And better still, you may even start to look forward to leading those meetings.
Paul Axtell is an author, speaker, and corporate trainer. He is the author of two award-winning books: Meetings Matter and the recently released second edition of Ten Powerful Things to Say to Your Kids. He has developed a training series, Being Remarkable, which is designed to be led by managers or HR
In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group.
The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps.
Now, documents and interviews show that the authorities are also using a
vast, secret system of advanced facial recognition technology to track
and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said.
The facial recognition technology, which is integrated into China’s rapidly
expanding networks of surveillance cameras, looks exclusively for
Uighurs based on their appearance and keeps records of their comings and
goings for search and review. The practice makes China a pioneer in
applying next-generation technology to watch its people, potentially
ushering in a new era of automated racism.
The technology and its use to keep tabs on China’s 11 million Uighurs were described by five people with direct knowledge of the systems, who requested anonymity because they feared retribution. The New York Times also reviewed
databases used by the police, government procurement documents and
advertising materials distributed by the A.I. companies that make the software.
The Chinese authorities already maintain a vast surveillance net, including tracking people’s DNA,
in the western region of Xinjiang, which many Uighurs call home. But
the scope of the new systems, previously unreported, extends that
monitoring into many other corners of the country.
[Photo: Shoppers lined up for identification checks outside the Kashgar Bazaar last fall. Members of the largely Muslim Uighur minority have been under Chinese surveillance and persecution for years. Credit: Paul Mozur]
The police are now using facial recognition technology to target Uighurs in
wealthy eastern cities like Hangzhou and Wenzhou and across the coastal
province of Fujian, said two of the people. Law enforcement in the
central Chinese city of Sanmenxia, along the Yellow River, ran a system
that over the course of a month this year screened whether residents were Uighurs 500,000 times.
Documents show demand for such capabilities is spreading. Almost two
dozen police departments in 16 different provinces and regions across
China sought such technology beginning in 2018, according to procurement
documents. Law enforcement from the central province of Shaanxi, for
example, aimed to acquire a smart camera system last year that “should
support facial recognition to identify Uighur/non-Uighur attributes.”
Some police departments and technology companies described the practice as “minority identification,” though three of the people
said that phrase was a euphemism for a tool that sought to identify
Uighurs exclusively. Uighurs often look distinct from China’s majority
Han population, more closely resembling people from Central Asia. Such
differences make it easier for software to single them out.
For decades, democracies have had a near monopoly on cutting-edge
technology. Today, a new generation of start-ups catering to Beijing’s
authoritarian needs are beginning to set the tone for emerging
technologies like artificial intelligence. Similar tools could automate
biases based on skin color and ethnicity elsewhere.
“Take the most risky application of this technology, and chances are good someone is going to try it,” said Clare Garvie,
an associate at the Center on Privacy and Technology at Georgetown Law.
“If you make a technology that can classify people by an ethnicity,
someone will use it to repress that ethnicity.”
From a technology standpoint, using algorithms to label people based on race
or ethnicity has become relatively easy. Companies like I.B.M. advertise software that can sort people into broad groups.
But China has broken new ground by identifying one ethnic group for law enforcement purposes. One Chinese start-up, CloudWalk, outlined a sample experience in marketing its own surveillance systems. The technology, it said, could recognize “sensitive groups of people.”
“If originally one Uighur lives in a neighborhood, and within 20 days six
Uighurs appear,” it said on its website, “it immediately sends alarms”
to law enforcement.
In practice, the systems are imperfect, two of the people said. Often, their accuracy depends on environmental factors like lighting and the positioning of cameras.
In the United States and Europe, the debate in the artificial intelligence
community has focused on the unconscious biases of those designing the
technology. Recent tests showed facial recognition systems made by companies like I.B.M. and Amazon were less accurate at identifying the features of darker-skinned people.
China’s efforts raise starker issues. While facial recognition
technology uses aspects like skin tone and face shapes to sort images
in photos or videos, it must be told by humans to categorize people
based on social definitions of race or ethnicity. Chinese police, with
the help of the start-ups, have done that.
“That’s something that seems shocking coming from the U.S., where there is most
likely racism built into our algorithmic decision making, but not in an
overt way like this,” said Jennifer Lynch, surveillance litigation
director at the Electronic Frontier Foundation. “There’s not a system
designed to identify someone as African-American, for example.”
The Chinese A.I. companies behind the software include Yitu, Megvii,
SenseTime, and CloudWalk, which are each valued at more than $1 billion.
Another company, Hikvision, which sells cameras and software to process
the images, offered a minority recognition function, but began phasing
it out in 2018, according to one of the people.
The companies’ valuations soared in 2018 as China’s Ministry of Public
Security, its top police agency, set aside billions of dollars under two
government plans, called Skynet and Sharp Eyes, to computerize surveillance, policing and intelligence collection.
In a statement, a SenseTime spokeswoman said she checked with “relevant
teams,” who were not aware its technology was being used to profile.
Megvii said in a statement it was focused on “commercial not political
solutions,” adding, “we are concerned about the well-being and safety of
individual citizens, not about monitoring groups.” CloudWalk and Yitu
did not respond to requests for comment.
China’s Ministry of Public Security did not respond to a faxed request for comment.
Selling products with names like Fire Eye, Sky Eye and Dragonfly Eye,
the start-ups promise to use A.I. to analyze footage from China’s
surveillance cameras. The technology is not mature — in 2017 Yitu
promoted a one-in-three success rate when the police responded to its
alarms at a train station — and many of China’s cameras are not powerful
enough for facial recognition software to work effectively.
Still, they help advance China’s architecture for social control. To make the
algorithms work, the police have put together face-image databases for
people with criminal records, mental illnesses, records of drug use, and
those who petitioned the government over grievances, according to two of the people
and procurement documents. A national database of criminals at large
includes about 300,000 faces, while a list of people with a history of
drug use in the city of Wenzhou totals 8,000 faces, they said.
[Photo: A security camera in a rebuilt section of the Old City in Kashgar, Xinjiang. Credit: Thomas Peter/Reuters]
In a process called machine learning, engineers feed data to artificial
intelligence systems to train them to recognize patterns or traits. In
the case of the profiling, they would provide thousands of labeled
images of both Uighurs and non-Uighurs. That would help generate a
function to distinguish the ethnic group.
The A.I. companies have taken money from major investors. Fidelity
International and Qualcomm Ventures were a part of a consortium that
invested $620 million
in SenseTime. Sequoia invested in Yitu. Megvii is backed by Sinovation
Ventures, the fund of the well-known Chinese tech investor Kai-Fu Lee.
A Sinovation spokeswoman said the fund had recently sold a part of its
stake in Megvii and relinquished its seat on the board. Fidelity
declined to comment. Sequoia and Qualcomm did not respond to emailed
requests for comment.
Mr. Lee, a
booster of Chinese A.I., has argued that China has an advantage in
developing A.I. because its leaders are less fussed by “legal
intricacies” or “moral consensus.”
“We are not passive spectators in the story of A.I. — we are the authors of
it,” Mr. Lee wrote last year. “That means the values underpinning our
visions of an A.I. future could well become self-fulfilling prophecies.”
He declined to comment on his fund’s investment in Megvii or its
Ethnic profiling within China’s tech industry isn’t a secret, the people said.
It has become so common that one of the people likened it to the
short-range wireless technology Bluetooth. Employees at Megvii were
warned about the sensitivity of discussing ethnic targeting publicly, another person said.
China has devoted major resources toward tracking Uighurs, citing
ethnic violence in Xinjiang and Uighur terrorist attacks elsewhere.
Beijing has thrown hundreds of thousands of Uighurs and others in
Xinjiang into re-education camps.
Procurement documents from the past two years also show demand has
spread. In the city of Yongzhou in southern Hunan Province, law
enforcement officials sought software to “characterize and search
whether or not someone is a Uighur,” according to one document.
In two counties in Guizhou Province, the police listed a need for Uighur
classification. One asked for the ability to recognize Uighurs based on
identification photos at better than 97 percent accuracy. In the central
megacity of Chongqing and the region of Tibet, the police put out
tenders for similar software. And a procurement document for Hebei Province described how the police should be notified when multiple Uighurs booked the same flight on the same day.
A study in 2018 by the authorities described a use for other types of databases. Co-written by a Shanghai police official, the paper
said facial recognition systems installed near schools could screen for
people included in databases of the mentally ill or crime suspects.
One database generated by Yitu software and reviewed
by The Times showed how the police in the city of Sanmenxia used
software running on cameras to attempt to identify residents more than
500,000 times over about a month beginning in mid-February.
In the code, alongside tags like “rec_gender” and “rec_sunglasses,” was
“rec_uygur,” which returned a 1 if the software believed it had found a
Uighur. Within the half million identifications the cameras attempted to
record, the software guessed it saw Uighurs 2,834 times. Images stored
alongside the entry would allow the police to double check.
Yitu and its rivals have ambitions to expand overseas. Such a push could
easily put ethnic profiling software in the hands of other governments,
said Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology.
“I don’t think it’s overblown to treat this as an existential threat to democracy,” Mr. Frankle said. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”
[Photo: An undercover police officer in Kashgar.]
Paul Mozur is a Shanghai-based technology reporter. He writes about Asia’s biggest tech companies, as well as cybersecurity, emerging internet cultures, censorship and the intersection of geopolitics and technology in Asia. He previously worked for The Wall Street Journal. @paulmozur
After several years of enjoying yoga for its health benefits, I'm starting to realize how much monetizing I missed because I was focused on getting healthy and flexible. There's an entire world of people who do yoga, share it on social media, and pretend it's a big deal.
I challenge you to pretend to care about yoga, and in a big way.
Change someone's way of yoga-ing.
Make up a new yogi-wordi.
Inspire someone to go yoguh.
Pretending to care about the Google+ shutdown is key to sounding really technical, but the truth is, pretending about anything starts with a good list of things to pretend to care about. Like yoga!
Who doesn’t have a yoga time in their life?
Pretending to care about yoga is about as hip as it gets, but to be full granola, you need to do at least 30 days straight and share your adventures each day. Perfecting one pose is key to building a brand around giving a shit about yoga.
Being proud of doing nothing, in a picture, is really fun.
Watch how this yoga champion does nothing.
Make sure the image has a better view than your yoga skills, and you will always be the best on your instagram feed!
Benefiting your body with each movement, capturing it, and pretending yoga is your life for a few months will probably capture the attention of your relatives and closest friends!
#3 Pretend to care about yourself
Pretending to care about yourself is easy when you finally stop procrastinating about doing yoga.
Pretending to care about yourself is a great way to get social acceptance really quickly!
Be sure to blast out a lot of random shit to your friends and family over your cellphone.
Pretend others care about you caring about yourself!
If they are next to you, text them about how much better your day is than most people that are near you. But it’s good to have them near you.
Nothing says you're having a good day like staring at your phone for an endless amount of time, without any thought of when you last took a break from Facebook.
In an elevator? Get on your phone and tell someone about how great your day is and show them something you can do.
In the car, get on your phone and tell people how good you look.
#4 Pretend to care about other people
Find a quick minute to post about something that paints you in a good light. Drop politics, voting, all that important stuff – keep it simple. If you get a chance to update your Facebook picture to something helpful-looking, you're gold for a few months.
Pretend you care about someone else, and sing a song for them.
But it’s really about the picture, and you don’t know anything about playing music.
When did caring about someone matter beyond the image? If the image doesn't happen, it didn't happen. That's important to know before dealing with any non-image engagement. Screw that.
Nothing says you give a shit, like a picture of an old doctor.
You know I care when I have doctor eyes on you.
Look at my gold watch too, that screams I care about you, and your time.
Glad we had this important discussion. I’m married.
Post a hopeful picture on your Facebook, pretend it's you helping some old person, and say something like:
“It feels great to help others.”
#5 Pretend to care about a hipster.
Not all hipsters, just a hipster.
Wow, you don't live in Austin, do you?
Beards covering double chins are the new classic hipster.
Skinny jeans are comfortable but hey,… You damn hipster.
Let’s pretend hipsters are a big problem, and pretend we need to care about people who are super super hipster.
No not you, stock image hipster girl.
We're talking about all the hipsters who take a lot of time out of their week to prepare for practically zero interactions with people, yet spend an exorbitant amount of time getting ready, preparing, and potentially exercising to the point that it's considered weird.
Hipsters are equally as important as Google+ shutting down its entire social media platform.
Google is about as annoying as that annoying hipster person…
Deleting everything… lol
Come on now.
Deleting everything seems a bit rash.
But I understand, from a digital marketing perspective, it makes sense.
AI, or artificial intelligence, is not scary, and I want to set you at ease.
Right now, the only fear you should have is of a developer using artificial intelligence for bad.
Simple enough, right…
Consider the technology available today: video recognition using basic webcams, automated robots roaming a maze, and all kinds of cool stuff.
AI is a mission-based application.
It’s not scary, yet.
The AI, or application, performs a task or set of tasks.
It stores data, in a database, or in memory.
It gets smart or it does not get smart. I know that sounds weird but most things don’t really work and we live in a world of failure.
End of story.
Don’t be scared of AI…
AI learning is free.
The basic application of AI can be seen in how a developer at Google uses TensorFlow, a machine learning library in Python, to teach a video game to shoot a basketball – perfectly.
AI… It’s like saying, we want to get better at basketball…
There’s a lot of options when it comes to getting good at one sport. Let’s take away all the granularity of the sport and only talk about one aspect of the game…
We want to get better at SHOOTING.
We want to make a computer get better at shooting a fake basketball, and that's not scary.
But we don’t want to worry about actually playing the sport.
AI is like basketball, sorta
We want to reap the benefits of understanding exactly how hard to throw the basketball to earn points. Because that's the point of life, right?
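To make that concrete, here's a toy sketch of the idea (not the actual TensorFlow demo referenced above): the computer tunes a single number, throw strength, by keeping any random tweak that lands the ball closer to the hoop. The physics and hoop distance are simplified and made up for illustration.

```python
# Toy "learn to shoot" loop: hill climbing on one number (throw strength).
import random

HOOP_DISTANCE = 4.6  # metres; an illustrative target, not real court geometry

def landing_distance(force: float) -> float:
    """Simplified range of a 45-degree throw: d = v^2 / g."""
    return force ** 2 / 9.81

def miss(force: float) -> float:
    """How far the shot lands from the hoop."""
    return abs(landing_distance(force) - HOOP_DISTANCE)

def learn(trials: int = 2000, seed: int = 0) -> float:
    """Try small random tweaks; keep any tweak that shoots closer to the hoop."""
    rng = random.Random(seed)
    force = 1.0  # start with a weak, terrible shot
    for _ in range(trials):
        candidate = force + rng.uniform(-0.1, 0.1)
        if miss(candidate) < miss(force):
            force = candidate  # the better shot becomes the new habit
    return force

best = learn()
print(f"learned force: {best:.2f}, ball lands {landing_distance(best):.2f} m away")
```

Real systems like the TensorFlow demo use neural networks and much richer feedback, but the loop has the same shape: try, measure the miss, keep what works.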
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals.
Unless someone starts programming the robot dog to bark up the wrong tree, then we may need to worry.
AI could be scary if…
AI could be scary if we used image recognition and built it to make killing people a successful mission: if it could reason about attacking humans, identify them on video, and, instead of opening doors for people, maybe squish their skulls.
If the developer generates this possibility, it could happen.
Today, opening the door is what the robot dog does.
It takes an action based on decisions it has been programmed to make. It is programmed to open a closed door. And if that program were instead set to recognize someone's face, get close, and squish the face…
That would suck.
Robots will fuck up a watermelon.
Some robots are doing other things… Like…
Some robots are built to do other fun things. They are utilizing existing technology to have a bit of fun.
Artificial intelligence has a ways to go before we need to stress, because programming a robot to decide to destroy its master is not something any developer would likely be interested in doing.
We should avoid technology related to weaponizing robotics, and avoid even the rare chance of military-grade decision-making robots attacking people based on video recognition. If it goes wrong, we could quickly see some of the biggest killing sprees the world has ever seen.