The blog occasionally discusses digital marketing and AI, but it mostly covers laws that impact the way we use technology.
Sundar Pichai, writing on Google's blog, defined Artificial Intelligence (AI) this way: “AI is computer programming that learns and adapts.” He went on to list some remarkable things one can do with AI, such as using sensors to predict wildfires, monitor cattle hygiene, diagnose diseases and more.
He also listed seven principles that, according to Pichai, will guide the work that Google carries out in the field of AI. These principles include sensitivity to people’s privacy and a commitment to upholding standards of scientific excellence.
Even while we stand at the cusp of a revolution that’s likely unprecedented in human history, there’s an elephant in the room that a lot of people are looking away from: ethical issues.
Against the seven principles that Google claims it will follow, there are some serious questions that AI researchers, policymakers and people in general must face. These questions are based on universal ethics and can have far-reaching implications for virtually every aspect of human life.
These questions on the ethics of AI involve a variety of things but always include the concept of individual liberty, the idea of a protective state and whether there’s a growing contradiction between the two. AI and ethics, therefore, span the entire spectrum of individualism vs collectivism.
It’s critical that we discuss and address questions on ethics of artificial intelligence because very soon it might be too late. Here are the top ethical questions in artificial intelligence:
Maybe the correct question to ask would have been “Can we?” rather than “How can we?”
To begin with, artificial intelligence is created by humans, who are prone to strong prejudices. After the algorithm is designed, the machine is fed data to keep sharpening its ‘intelligence’.
The problem is that if the data fed in is itself biased (showing more black criminals than white ones, for example), the machine will learn from the wrong kind of data. And its output will remain, at best, contentious.
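To make the point concrete, here is a toy sketch (not from the original post; the data is invented) of how a model trained on skewed data simply reproduces the skew it was shown:

```python
# A toy illustration: a model trained on skewed data reproduces the skew.
from collections import Counter

# Hypothetical training records: (group, label). Group "a" is
# over-represented among positive labels purely because of biased
# sampling, not because of any real difference between the groups.
training_data = (
    [("a", 1)] * 80 + [("a", 0)] * 20 +
    [("b", 1)] * 20 + [("b", 0)] * 80
)

# "Training": estimate P(label = 1 | group) from the data.
counts = Counter(training_data)

def predicted_risk(group):
    positives = counts[(group, 1)]
    total = counts[(group, 1)] + counts[(group, 0)]
    return positives / total

print(predicted_risk("a"))  # 0.8 -- the sampling bias, echoed back
print(predicted_risk("b"))  # 0.2
```

The model isn’t “wrong” about its training data; the training data was wrong about the world, and the output inherits that.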
The fact that most developments in AI come from the private sector complicates matters. (China is an apparent exception because the Chinese government is very serious about AI, but that’s another story.) That’s because the private sector isn’t as answerable to the general populace as a government department, which may be subject to intense scrutiny. This lack of transparency is worrying.
In some ways, this question is a derivative of the earlier one. Because developments in AI are mostly powered by the private sector, the degree of security could ultimately become a function of their business interests.
Businesses built in the digital economy haven’t always proven reliable when it comes to self-restraint and self-regulation. Facebook violating the data privacy of its users is often cited, correctly, as an example of how corporate greed soon overtakes enthusiastic corporate missions of being a good entity.
There are three questions of major significance one must address in light of security issues in AI.
One: What paradigms should corporates use to decide what levels of security are adequate?
Two: What degree of policing of these corporations would be right?
Three: What checks and balances can corporations put in place to ensure the technology they develop doesn’t end up with malicious actors? Remember, in today’s world we’re talking about multinationals that may face conflicting requirements from two different governments.
Could AI be collecting information about you and keeping you in the dark?
In 2018, a story in The New York Times reported this: “Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.”
Earlier, in 2014, Google acquired DeepMind, one of the world’s most important AI labs. Something similar to Google’s Project Dragonfly happened: there were deep concerns about the direction Google could take DeepMind in. DeepMind has created a neural network that plays video games the way humans do. Its AlphaGo beat the Go world champion Lee Sedol in 2016 (Go is considered far more complex than chess).
An important announcement was made when Google took charge of DeepMind: an external review board was to be set up to ensure the lab’s research does not end up in military applications. Today, no one knows whether the board even exists, let alone makes any decisions.
Both the examples from Facebook and Google tell a similar story of bent morals: commercial interests have frequently overtaken noble projects and there’s no reason to think AI will not end up the same way.
Without enough oversight and a tight policy framework, AI can be very deceptive.
One risk is AI falling into the wrong hands.
The other is even graver: what if the AI is designed with mala fide intentions to begin with?
Consider the self-flying selfie camera developed by Skydio.
Using 13 cameras that do visual tracking, the flying robot, called R1, manages itself. At launch, you have to tell it what person or object to follow (you “tell” R1 through a specially designed app). One tap on the app and the robot figures out the rest: it reads the surroundings, decodes the obstacles, locks onto its target and begins following.
The dangerous part is, you don’t even have to buy it; you can rent it for just $40 a day. For the price of a pack of Parents’ Choice ‘best value’ diapers, you can spy on anyone, anywhere, for a full day.
While emerging technology can be breathtakingly exciting, it’s a serious mistake to launch the product commercially without understanding all the risks involved.
Even though it’s made to sound as if you can buy a gun in the US as easily as a can of Coke, that’s not true. There’s some paperwork and a background check involved in buying a gun, as Euronews has reported.
The Skydio camera mentioned above is just one of many, many devices that use AI and can be bought like a commodity on the open market. No questions asked.
Ironically, it’s been employees of these organizations, not outsiders, who have opposed deals that could put AI to military use (autonomous weapons, for instance). Employees of at least one company wrote an open letter to their CEO questioning the stance, wisdom and policy of working with the military.
Protests are sometimes successful (Google moved out of a Pentagon project and away from Project Dragonfly). Sometimes they aren’t (Clarifai, the company to which the above open letter was directed, is going ahead with its business with the US military).
In absence of detailed, strict and practical regulations, there’s no way of knowing what is cutting-edge and what is abominable.
China is using AI in some of the most innovative ways to bring in justice and stability. For instance, some 300 million cameras in China track the movement of people, enforce traffic discipline and deliver better, more efficient governance.
Critics are (rightly) wary of the way AI could be used by authoritarian governments like China. What keeps single-party governments – like China’s – from using AI to silence the voices that are unpleasant to the government?
With the help of face recognition technologies, for instance, China is able to not only track the whereabouts of “notorious elements” but also make traveling and buying air-tickets extremely difficult for people who are on the government’s blacklist.
The use of AI to contain and effectively suppress political dissidents is one of the major risks emerging in China, but that’s not to suggest other countries (their ruling parties, to be more specific) are in any way immune to the temptation of abusing AI to unleash a witch-hunt.
There’s an equally strong and logical argument that it’s important to make sure companies within Europe and the US too don’t start misusing artificial intelligence for their monetary gains; after all, it’s not just China that must be controlled, right?
While this argument is perfectly rational, there’s a class of people who oppose the idea of excessively regulating the AI industry in Europe or the US.
This is their principal argument: when you hold back European or American companies with red tape, China isn’t going to wait. So, effectively, you run the risk of China overtaking every other country in artificial intelligence.
This is called the “If we don’t, others surely will” mentality, and it definitely holds some water. A defiant China could jeopardize a great many things if it came to control a technology that’s effectively banned in other countries.
There are no two opinions that the ethical questions in artificial intelligence are far too many and far too compelling to be taken lightly. All emerging technologies come with associated risks, and AI is no exception.
More dialogue, more openness and international cooperation are probably going to work best for AI. We can only hope that developments in AI do not outpace the political will to develop the correct regulations.
The post Seven questions of ethics facing Artificial Intelligence appeared first on Technology services news.
Like it or not, practically none of your data is really private anymore. Apparently, public is the new private.
It’s no secret that any data that you voluntarily share on the internet will no longer be private, whether you like it or not, whether you explicitly permit or not. Data privacy, at least data that’s online, is almost a misnomer.
But what about the data you haven’t expressly permitted anyone to share?
What do various business enterprises, data brokers and analytics agencies know about you?
Does your supermarket know if you’re cheating on your partner? Does your car know whether you’re on some subscription drug? Does your favorite app know what political party you’ll likely endorse? Does your cab aggregator know you’ve just been fired?
It looks like yes, they do.
Let’s begin with supermarkets, one of the oldest data capturing bodies.
Supermarkets have collected and analysed data for a long time. One of the most popular methods of harvesting customer data has been loyalty programs and cards.
But the increase in computing power that came with Big Data suddenly allowed systems to make sense of customer records of almost unimaginable magnitude.
Here’s how supermarkets collect data about you and the way it’s used.
In one well-known article, Charles Duhigg, the author of The Power of Habit: Why We Do What We Do in Life and Business, explains how data analysis can produce unbelievable results.
Much as you’d like to drive a ‘connected’ car, there’s a lot of data the car collects about you.
Not surprisingly, you don’t know everything the car knows about you, where the data is sent, how it is processed and used, or whether you can do anything about it.
Yes, this data can be very useful for things like service reminders, vehicle usage patterns and other features that can make your driving safer and more pleasurable. That said, a good deal of information seeps out without your knowing it, including information you may be reluctant to share otherwise.
The Zebra created a neat infographic about what your car knows about you. Here are some of the things your car knows about you:
When relatively low-tech areas like cars and supermarkets can collect so much data about you, how can you expect smartphone apps to not probe further and practically know you inside out?
In the remote likelihood that you might have forgotten what happened with the fitness app Strava, here’s a quick recap:
Strava, like most other fitness apps, encourages users to record their activities and let the app access their location.
If I live in downtown Bronx and I see someone in my neighborhood has jogged 3 miles today, it rubs my ego the wrong way and makes me run 3.5 miles. Good intentions, basically.
Unfortunately, it led to what can only be termed a security threat. A number of US military personnel are Strava users too. When they let the app access their locations, they inadvertently disclosed where they were stationed. That exposed where American military bases were located, along with their supply and logistics routes.
Not exactly something you’d like to be proud of, right?
Apps like Strava make people pore closely over every single calorie they burnt, every step they walked, every yard they pedaled.
Exercising produces opioid-like hormones called endorphins. Such hormones lead to a feeling of euphoria (remember the aha feeling after a round of sit-ups?).
This euphoria may be the culprit in making disciplined military personnel share their locations on Strava.
Apps remind you of privacy policies, right? The ones you must tap “I agree” on before you can use the app.
Well, they could do with a bit of simpler wording. A post rightly observed how Privacy Policies, written in almost unreadable legalese, are nearly impossible to read.
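How unreadable is legalese, really? Here is a rough, illustrative sketch using the standard Flesch Reading Ease formula with a crude syllable counter (the two sample texts are invented; higher scores mean easier reading):

```python
import re

def flesch_reading_ease(text):
    """Approximate Flesch Reading Ease: higher = easier to read."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    # Rough syllable count: runs of vowels per word, minimum one.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

plain = "We collect your name. We use it to log you in."
legalese = ("Notwithstanding the foregoing, the undersigned hereby "
            "irrevocably consents to the collection, processing and "
            "dissemination of personally identifiable information.")

# Plain wording scores dramatically higher than the legalese.
print(flesch_reading_ease(plain) > flesch_reading_ease(legalese))  # True
```

Real privacy policies run to thousands of words of exactly that second kind of sentence, which is why almost nobody reads them.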
From a pessimistic point of view, there’s not going to be any data privacy if you use any online tools. At least not the way it was back in the 20th century.
Data is the new currency with which you pay for the usage of some apps. It doesn’t matter if the app is free (Facebook app) or paid (Procreate or Pocket Casts) – your data will always be at risk.
For instance, it’s clear that Facebook knows a lot more about you than you’d ever believe. Not only that, Facebook may be sharing your data with others secretly.
Apparently, there was never a better time to use the old parting words:
The post From cars to supermarkets to apps: How much everyone knows about you appeared first on Technology services news.
It’s unlikely you haven’t noticed the Facebook 10 year challenge if you’re on any social media platform. The apparently simple exercise requires you to post two photos of yourself side by side, one from ten years back and the other a current photo.
The company claims it’s a user-generated meme movement, which isn’t impossible.
However, few people realize how Facebook’s 10 year challenge could possibly benefit the company’s Artificial Intelligence ambitions in face recognition.
On the face of it, the Facebook 10 year challenge is a simple exercise. You dig out a photo of yourself from about 10 years back (i.e. 2009). Then you place it next to your most recent photo (i.e. a 2019 photo).
The juxtaposition shows how you have changed over the ensuing 10 years. Your friends and followers can view this and leave their comments or reactions. This fun exercise can trigger emotions like nostalgia.
Sounds pretty innocent and harmless, eh?
Basically, facial recognition technology identifies people from digital images. Primarily, it can perform the three functions below:
Wikipedia and other sources credit Woody Bledsoe, Helen Chan Wolf, and Charles Bisson as the pioneers of the technology behind automated facial recognition technology.
It has widespread applications, from policing and preventing frauds by stopping miscreants from entering a public event to simpler goals like school or factory attendance.
If you thought facial recognition is as simple as overlapping one image over another, you couldn’t be more wrong.
It doesn’t happen that way.
No two images will match precisely in terms of lighting, head tilt, distance and so on. As a result, the recognition system must be smart: it must be able to make some decisions itself, based on what it learns.
This learning can be enriched only if you feed the system a wide variety of data. Data collection for this training is time-consuming and therefore expensive. And yet, you can’t always get the variety of data you want.
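As an illustration of the idea (the vectors and threshold below are invented), modern systems typically map each face image to a numeric “embedding” and compare embeddings, not pixels, so two photos of the same person match even when lighting and angle differ:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: the same person photographed twice under
# different conditions, and a different person entirely.
person_a_photo1 = [0.9, 0.1, 0.3]
person_a_photo2 = [0.8, 0.2, 0.35]   # different lighting / head tilt
person_b_photo  = [0.1, 0.9, 0.2]

THRESHOLD = 0.9  # in real systems, tuned on large training sets

print(cosine_similarity(person_a_photo1, person_a_photo2) > THRESHOLD)  # True
print(cosine_similarity(person_a_photo1, person_b_photo) > THRESHOLD)   # False
```

The hard part is learning an embedding function that behaves this way for millions of faces, which is exactly why varied, labeled photo data is so valuable.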
What if you could get all this for free?
That’s exactly what is happening. You could be training Facebook’s AI with the 10 year challenge.
Like the hundreds of thousands of people doing it every day.
What could have cost Facebook billions of dollars, you’re doing willingly, for free.
By participating in the Facebook 10 year challenge, you are not just giving FB access to your photos, you’re basically training its AI system.
The New York Times reported this now-famous tweet by author Kate O’Neill: “…how all this data could be mined to train facial recognition algorithms on age progression and age recognition.”
If you look at the Facebook 10-year challenge with a suspicious eye, a lot of pieces fall into place:
What do you think? Is Facebook getting an upper hand in the race?
The post 6 ways Facebook 10 year challenge is helping their AI ambitions appeared first on Technology services news.
Is it possible that Facebook has been cheating millions of its users worldwide, along with governments and investigating agencies? A recent investigative report by The New York Times clearly indicates Facebook has been providing more “intrusive access to users’ personal data” than it told its users or governments about.
Worse still, Facebook may have been doing this for years now.
On December 18, 2018, The New York Times published a well-researched article about how Facebook has been handling users’ data. According to the report, there’s a huge gap between what Facebook tells its users and authorities about the way it shares users’ data and the way it actually does.
Apparently, Facebook has been giving access to its users’ data to some of the largest companies in the world.
Companies that have had access to Facebook users’ data include Amazon, Bing, Sony, Netflix, Spotify, Royal Bank of Canada, Yahoo… Needless to add, this access appears both illegal and unethical.
The list of companies that had access to Facebook users’ data reads eerily like the Fortune 500 list.
The New York Times investigation showed that Facebook had made deals with over 60 makers of smartphones, tablets and other devices to let them have access to Facebook users’ data. Here’s a list of what kind of data was available to some of them (note that all this, without users’ permission, was illegal):
Essentially, Facebook allowed the companies mentioned above, and many more, access to users’ data without express permission from users.
Not only that, it appears Facebook had not been fully honest in what it disclosed to authorities.
The biggest reason it is unfair and unethical (and possibly illegal) is this: the companies that were given access to Facebook users’ data were termed partners and accorded special status. As a result, they were not subjected to extensive privacy program reviews.
In other words, Facebook seemed to have relaxed its rules for these companies.
Here are some other reasons why Facebook’s sharing of data is unfair and unethical:
Facebook spokespersons are not sitting silently; they have been issuing their own versions of the truth and offering justifications and explanations.
Here are some of the explanations Facebook is putting up in its own favor:
In Part 1 of China’s Social Credit System, we covered the basics of the credit system of China. We talked about the weaknesses of the current credit system in China and compared it with the credit score systems in developed countries like the US, Germany and Switzerland.
Next, we identified the 4 principles behind the proposed system and the objectives the system seeks to achieve. We ended with an infographic on the 14 focus areas of the system.
In this second and final part, we talk about how the social credit system of China will be implemented, what its benefits are (from the point of view of the Chinese government) and what criticisms have been leveled against the proposed system.
The Social Credit System of China has the goal of establishing the basic structure of a credit system by 2020. Through that goal, it wishes to achieve objectives like:
The time-line of the history and implementation of China’s Social Credit System can be roughly represented in the following way:
Exactly what technology will be used, or is already in use, is not clear at this stage. And that is partly understandable: if the authorities were to expose everything, the risk of people gaming the system would increase manifold.
Sources used include:
The post Social Credit System China Part 2: Implementation, Benefits, Criticism appeared first on Technology services news.
“Marketers today must be part artist, part scientist,” says Michelle Urban of Marketing 261. Into that small phrase she packs a lot of punch, as well as a sense of what the future holds for marketing.
From the days when marketing meant sending out fancy ads and offering great discounts, it has evolved into a craft, a complex profession with science and art in almost equal measure. The internet, with its metrics and measurement tools, is fast making marketing an exact science.
On the other hand, the unprecedented changes that technology keeps bringing into our lives keep marketing from becoming a predictable, humdrum activity.
We spoke to Michelle to understand what she thought of brick-and-mortar businesses embracing the internet, AI, executive buy-in and a lot more. Here goes: (Scroll down for an infographic.)
We hear Content is King so often. In a crowded marketplace, how do you suggest bringing readers to your blog when everyone is producing a lot of content and when readers’ attention span is continually shrinking?
Don’t write for the sake of writing. Write only for your target audience. Write about how they can work through their challenges, pain points, and obstacles. Write about how they can reach their goals and how they can be more successful in their job. Give them useful and practical content.
If your readership is quickly skimming your content and bouncing off, your content is not geared towards their needs. When it’s not relevant or interesting, chances are readers are not going to engage or return. Make your content inspiring and educational for your target audience.
There are still a large number of successful, brick-and-mortar businesses that haven’t embraced the digital space. How do you think they should go about building their brand online and make sure their voice is heard, especially if even their customers aren’t frequent on the internet?
In this day and age, it’s silly for anyone NOT to have a website. A website shows credibility and, when done correctly, social credibility. All websites should be optimized for mobile and local search.
What are the three skills you rate as most important for a digital marketer in today’s world?
1. Be resourceful
2. Be part artist AND part scientist.
3. Be a risk taker
Businesses have begun investing in digital marketing, but there’s still some resistance when it comes to paying for tools and services that don’t directly lead to marketing (e.g. SEO tools, email verification, analytics tool etc). How should marketers go about getting top-level executive buy-in for such matters?
Whether a marketer is asking for new tools, to sponsor an event, invest in new programs, or double down on an existing channel, the best way to get buy-in is by letting the metrics do the talking. Break down how the line item is going to help reach the company goal.
Executives speak one language and that is revenue. Provide the details that support the positive ROI. If you cannot show this, chances are you don’t need it.
For the few industries that don’t expect much business to come from online inquiries (e.g. heavy engineering) in the next few years, how do you suggest they approach their online marketing efforts?
Your brand matters – bottom line. In today’s day and age, your brand needs to expand to the web in some way, shape or form. If you’re in a field that is not web forward, chances are a potential buyer is going to be Googling something pertaining to your brand – the owners, the investors, the competitors. Having an online presence, like a website, can show credibility and social proof.
Artificial Intelligence (AI) is fast becoming a threat to many professions. How do you think marketers and freelancers can tackle that?
I can see many marketers using AI to increase their productivity. AI algorithms can help automate the repetitive tasks that many freelancers do on a weekly or monthly basis. This leads to increased productivity, which saves time and money.
Do you think customer loyalty will be a realistic goal to pursue over the next few years, given the enormous competition everywhere?
Yes, without a doubt. Companies should make their customers their #1 priority. This means building meaningful and lasting relationships with your customers and users.
So even if they move away from being your customer, they can still help to promote your brand positively due to the great experience they had while engaging with your product/service. Always leave the door open for your customers to return to you quickly.
Michelle Urban is the founder of Marketing 261, a marketing shop for tech startups and small businesses. With a hands-on, get-it-done attitude, she and her team focus on executing measurable plans to get real results. For over 16 years, she’s built scalable marketing programs for demand creation, lead generation, customer advocacy, and engagement. A few of her clients include productboard, Rancher Labs, Layer, BetterManager, and more.