Blogging Fusion Blog Directory: the #1 blog directory and the oldest directory online.

Almostism



Rated: 1.00 / 5 | 447 listing views

India

 

General Audience

  • Mayank Batavia
  • October 09, 2018 01:15:39 PM

A Little About Us

The blog occasionally discusses digital marketing and AI. But it mostly discusses laws that impact the way we use technology.


Seven questions of ethics facing Artificial Intelligence


Sundar Pichai, writing on the Google blog, defined Artificial Intelligence (AI) this way: “AI is computer programming that learns and adapts.” He went on to list some amazing things one could do with AI, like using sensors to predict wildfires, monitor cattle hygiene, diagnose diseases and more.

He further listed seven principles that, according to Pichai, will guide the work that Google will carry out in the field of AI. These principles include sensitivity to people’s privacy and a commitment to upholding standards of scientific excellence.

Even while we stand at the cusp of a revolution that’s likely unprecedented in human history, there’s an elephant in the room a lot of people are looking away from: ethical issues.

What are the ethical dilemmas associated with AI?

Against the seven principles that Google claims it will follow, there are some serious questions that AI researchers, policymakers and people in general must face. These questions are based on universal ethics and can have far-reaching implications for virtually every aspect of human life.

These questions on the ethics of AI involve a variety of things but always include the concept of individual liberty, the idea of a protective state and whether there’s a growing contradiction between the two. AI and ethics, therefore, span the entire spectrum of individualism vs collectivism.

It’s critical that we discuss and address questions on ethics of artificial intelligence because very soon it might be too late. Here are the top ethical questions in artificial intelligence:

1. How can we minimize or eliminate bias in the algorithm of AI?

Maybe the correct question to ask would have been “Can we?” rather than “How can we?”

To begin with, artificial intelligence is created by humans, who are prone to strong prejudices. After the algorithm is designed, the machine is fed data to keep sharpening its ‘intelligence’.

The problem is that if the data fed is racist in nature – showing more black criminals than white ones, for example – the machine will learn from the wrong kind of data. And its output will remain, at best, contentious.
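
To make the concern concrete, here is a minimal, hypothetical sketch (in Python, using pandas) of one simple check practitioners can run before training: compare how often each group carries the “positive” label in the data. The column names and numbers are invented for illustration; a lopsided result suggests the data, rather than reality, may be teaching the model its prejudice.

```python
# Hypothetical sanity check: compare how often each demographic group is
# labeled "positive" (e.g. flagged as high-risk) in the training data.
# A large gap is a warning sign that a model trained on this data will
# simply reproduce the skew. All column names and values are invented.

import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Share of positive labels within each group."""
    return df.groupby(group_col)[label_col].mean()

# Toy data: the 'arrested' label is heavily skewed against group B.
training_data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "arrested": [ 0,   0,   0,   1,   1,   1,   1,   0 ],
})

rates = positive_rate_by_group(training_data, "group", "arrested")
print(rates)                                     # A: 0.25, B: 0.75
print("disparity:", rates.max() - rates.min())   # 0.50 -> the data itself looks skewed
```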

The fact that most of the developments in AI come from the private sector complicates matters. (China is an apparent exception because the Chinese government is very serious about AI, but that’s another story.) That’s because the private sector isn’t that worried about being answerable to the general populace, unlike a government department, which may be subject to intense scrutiny. This lack of transparency is worrying.

2. How secure will the AI be?

In some ways, this question is a derivative of the earlier one. Because developments in AI are mostly powered by the private sector, the degree of security could ultimately become a function of their business interests.

Businesses built in the digital economy haven’t always proven reliable when it comes to self-restraint and self-regulation. Facebook violating the data privacy of its users is often cited, correctly, as an example of how corporate greed soon overtakes enthusiastic corporate missions to be a force for good.

There are three questions of major significance one must address in light of security issues in AI.

One: What paradigms should corporates use to decide what levels of security are adequate?

Two: What degree of policing of corporates would be right?

Three: What checks and balances can corporates put in place to ensure the technology developed doesn’t end up with malicious actors? Remember, in today’s world we’re talking about multinationals who may find two governments with conflicting requirements.

3. How can we stop AI from being deceptive?

Could AI be collecting information about you and keeping you in the dark?

In 2018, a story in The New York Times reported this: “Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.”

Earlier, in 2014, Google took over one of the world’s most important AI labs, DeepMind. Something similar to Google’s Project Dragonfly happened: there were deep concerns about the direction Google could lead DeepMind into. DeepMind has created neural networks that play video games the way humans do. Its AlphaGo beat Go world champion Lee Sedol in 2016 (the game Go is considered far more complex than chess).

An important announcement was made when Google took charge of DeepMind. An external review board was to be set up that would ensure the lab’s research does not end up in military applications. Today, no one knows whether the board even exists, let alone makes any decisions.

Both the examples from Facebook and Google tell a similar story of bent morals: commercial interests have frequently overtaken noble projects and there’s no reason to think AI will not end up the same way.

Without enough oversight and a tight policy framework, AI can be very deceptive.

4. What can we do to stop AI from being malicious?

One risk is that of AI falling into the wrong hands.

The other is even more grave: what if the AI is designed with malafide intentions to begin with?

Consider the self-flying selfie camera developed by Skydio. Using 13 cameras that do visual tracking, the flying robot, called R1, manages itself – at launch, you have to tell it what person or object the R1 must follow (you “tell” R1 through a specially designed app that can sit on any smartphone).

That’s it.

One click on the app and the robot will figure out the rest. It will read the surroundings, decode the obstacles, lock onto its target and begin following.

The dangerous part is, you don’t even have to buy it; you can rent it for just $40 a day. For the price of one pack of Parents’ Choice ‘best value’ diapers, you can spy on anyone, anywhere for one full day.

While emerging technology can be breathtakingly exciting, it’s a serious mistake to launch the product commercially without understanding all the risks involved.

5. How far can you trust something that’s largely unregulated?

Even though it’s made to sound as if you can buy a gun in the US as easily as you can buy a can of Coke, that’s not true. There’s some paperwork and a background check involved in buying guns, as Euronews wrote.

The Skydio camera (mentioned above) is just one of the many, many devices that use AI and that you can buy easily, like a commodity, on the open market. No questions asked.

Ironically, it’s been employees of organizations, not outsiders, who have opposed deals that could have put AI to military use (autonomous weapons, for instance). For instance, employees of at least one company wrote an open letter to their CEO, questioning the stance, wisdom and policy of working with the military.

Protests are sometimes successful (Google pulling out of a Pentagon project and out of Project Dragonfly). Sometimes they aren’t (Clarifai, the company to which the above open letter was directed, is going ahead with its business with the US military).

In the absence of detailed, strict and practical regulations, there’s no way of knowing what is cutting-edge and what is abominable.

6. Is there any way AI can be kept from being used for political vendetta?

China is using AI in some of the most innovative ways to bring in justice and stability. For instance, the 300 million cameras in China track the movement of people, enforce traffic discipline and deliver better, more efficient governance.

Critics are (rightly) wary of the way AI could be used by authoritarian governments like China. What keeps single-party governments – like China’s – from using AI to silence the voices that are unpleasant to the government?

With the help of face recognition technologies, for instance, China is able to not only track the whereabouts of “notorious elements” but also make traveling and buying air-tickets extremely difficult for people who are on the government’s blacklist.

The use of AI to contain and effectively suppress political dissidents is one of the major risks emerging in China, but that’s not to suggest other countries (their ruling parties, to be more specific) are in any way immune to the temptation of abusing AI and unleashing a witch-hunt.

7. What about the odd risk of “If we don’t, others surely will”?

There’s an equally strong and logical argument that it’s important to make sure companies within Europe and the US too don’t start misusing artificial intelligence for their monetary gains; after all, it’s not just China that must be controlled, right?

While this argument is perfectly rational, there’s a class of people who oppose the idea of excessively regulating the AI industry in Europe or the US.

This is their principal argument: when you hold back European or American companies with red tape, China isn’t going to wait. So, effectively, you run a potential risk of China overtaking every other country in artificial intelligence.

This is called the “If we don’t, others surely will” mentality, and there’s definitely some weight to it. A defiant China could actually jeopardize a lot of things when it controls a technology that’s effectively banned in other countries.

Conclusion

There are no two opinions that the ethical questions in artificial intelligence are far too many and far too compelling to be taken lightly. All emerging technologies come with associated risks, and AI is no exception.

More dialogue, more openness and international cooperation are probably going to work best for AI. We can only hope that developments in AI do not outpace the political will to develop the correct regulations.

Featured image: Photo by Evan Dennis on Unsplash



From cars to supermarkets to apps: How much everyone knows about you


Like it or not, practically none of your data is really private anymore. Apparently, public is the new private.

It’s no secret that any data that you voluntarily share on the internet will no longer be private, whether you like it or not, whether you explicitly permit or not. Data privacy, at least data that’s online, is almost a misnomer.

But what about the data you haven’t expressly permitted anyone to share?

What do various business enterprises, data brokers and analytics agencies know about you?

Does your supermarket know if you’re cheating on your partner? Does your car know whether you’re on some subscription drug? Does your favorite app know what political party you’ll likely endorse? Does your cab aggregator know you’ve just been fired?

It looks like yes, they do.

Let’s begin with supermarkets, one of the oldest data capturing bodies.

What supermarkets know about you

Supermarkets have collected and analysed data for a long time. One of the most popular methods of harnessing customer data has been loyalty programs and cards.

But the increase in computing power that Big Data brought suddenly began allowing systems to make sense of reams and reams of data of almost unimaginable magnitude.

Here’s how supermarkets collect data about you and the way it’s used.

  • Sales: What you buy says truckloads about you. Based on small changes in your purchase patterns, supermarkets and big retailers can predict if you’re on a diet (obviously), expecting a baby, divorcing or getting married, switching jobs and a lot, lot more. This info is used to figure out not only how to better lay out products so that you don’t miss them but also to learn what offers you’ll find irresistible.
  • Browsing: Did you stop by the organic cereal rack? The CCTV could pick up subtle cues about how long customers stop where and what they ultimately end up buying. Smart visual recognition systems will ultimately decipher what combos you browsed before finally making a decision.
  • Loyalty: How likely are you to be swayed by a competing retailer’s offer? The change in your buying patterns or quantities could be a signal, but there’s a lot more. For instance, a supermarket will want to convince you that it stocks everything you ever need. To do that, it will mine, buy and analyze everything it can learn about you – and that’s the way to build loyalty in the data-driven economy.

In this article, Charles Duhigg, the author of The Power of Habit: Why We Do What We Do in Life and Business, explains how data analysis can produce unbelievable results.

For instance, the retail giant Target can figure out if a customer is pregnant, irrespective of whether the customer wishes to disclose this information or not.

What your car knows about you

Much as you’d like to drive a ‘connected’ car, there’s a lot of data the car collects about you.

Not surprisingly, you don’t know everything the car knows about you, where the data is sent, how it is processed and used, and whether you can do anything about it.

Yes, this data can be very useful for things like service reminders, vehicle usage patterns and other features that can make your driving safer and more pleasurable. That said, there’s a good deal of information that seeps out without you knowing it, including information you may be reluctant to share otherwise.

The Zebra created a neat infographic about what your car knows about you. Here are some of the things your car knows about you:

  • Your phones and text messages: Does your car system read out the text messages you receive? It likely stores data from that.
  • Your home – or permanent location: Do you drive to a certain place regularly after work? Your car interprets that location as your home.
  • Your driving skills: Do you brake too often? Are you wearing seatbelts? How about you checking your phone while driving? Do you overspeed often? Depending upon the make and the model of your car, some or all of this – and more – is tracked and stored. Car black boxes, remember?

Your fitness app could be a felon too – unknowingly

When relatively low-tech areas like cars and supermarkets can collect so much data about you, how can you expect smartphone apps to not probe further and practically know you inside out?

In the remote likelihood that you might have forgotten what happened with the fitness app Strava, here’s a quick recap:

Strava, like most other fitness apps, encourages users to record their activities and let the app access their location.

Why?

If I live in downtown Bronx and I see someone in my neighborhood has jogged 3 miles today, it rubs my ego the wrong way and makes me run 3.5 miles. Good intentions, basically.

Unfortunately, it led to what is termed a security threat. A number of US military personnel are Strava users too. When they let the app access their locations, they inadvertently disclosed where they themselves were stationed. That also exposed where the American military was currently located, as well as its supply and logistics routes.

Not exactly something you’d like to be proud of, right?

If you’re wondering why military personnel divulged the details, here’s one of the many likely explanations:

Apps like Strava make people pore closely over every single calorie they burnt, every step they walked, every yard they pedaled.

Exercising produces opioid-like hormones called endorphins. Such hormones lead to a feeling of euphoria (remember the aha feeling after a round of sit-ups?).

This euphoria may be the culprit in making disciplined military personnel share their locations on Strava.

Privacy Policies of apps

Apps remind you of privacy policies, right? The ones you must “I agree” to before you can use the app.

Well, they could do with a bit of simpler wording. A post rightly observed how Privacy Policies, written in almost unreadable legalese, are nearly impossible to read.

Here are three things you must understand about privacy policies (they stand out like a sore thumb):

  1. Level: The post referred to above found that the average grade level (of language used) of a privacy policy is 11.5. Not exactly simple stuff, if you recall that Americans read at an average 8th-grade level.
  2. Size: Even at 250 words per minute, you’d need over 17 minutes just to read Facebook’s Privacy Policy (see the quick back-of-the-envelope check after this list). That doesn’t account for the time you’d need to comprehend whatever is written. Now recall all the sites and all the apps you use and ask yourself if you actually read – and understand – those privacy policies. It’s a safe bet you don’t.
  3. Variety: Nearly 6 million apps are estimated to be on Google’s Play Store and Apple’s App Store. Some of these apps are crafted by one- and two-person teams, while others are crafted by large corporates and conglomerates. They bring different levels of commitment to, and understanding of, privacy policies.
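
As a quick back-of-the-envelope check of the reading-time claim in point 2, here is a tiny Python sketch. The word count used is an assumption for illustration, not Facebook’s actual figure.

```python
# Reading time = word count / reading speed, ignoring comprehension time.
# The policy length below is a hypothetical figure, not Facebook's actual count.

def reading_time_minutes(word_count: int, words_per_minute: int = 250) -> float:
    """Minutes needed just to read a document at a given speed."""
    return word_count / words_per_minute

assumed_policy_length = 4_500   # hypothetical word count for a long privacy policy
print(f"{reading_time_minutes(assumed_policy_length):.0f} minutes")  # ~18 minutes
```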

Conclusion

From a pessimistic point of view, there’s not going to be any data privacy if you use any online tools. At least not the way it was back in the 20th century.

Data is the new currency with which you pay for the usage of some apps. It doesn’t matter if the app is free (Facebook app) or paid (Procreate or Pocket Casts) – your data will always be at risk.

For instance, it’s clear that Facebook knows a lot more about you than you’d ever believe. Not only that, Facebook may be sharing your data with others secretly.

Apparently, there was never a better time to use the old parting words:

Take care.



6 ways Facebook 10 year challenge is helping their AI ambitions


It’s unlikely you haven’t noticed the Facebook 10 year challenge if you’re on any social media platform. The apparently simple exercise requires you to post two photos of yourself side by side, one from ten years back and the other your current photo.

The company claims it’s a user-generated meme movement, which isn’t impossible.

However, few people realize how Facebook’s 10 year challenge could possibly benefit the company’s Artificial Intelligence ambitions in face recognition.

What is the Facebook 10 year challenge?

On the face of it, the Facebook 10 year challenge is a simple exercise. You dig out a photo of yourself from about 10 years back (i.e. 2009). Then you place it next to your most recent photo (i.e. a 2019 photo).

The juxtaposition shows how you have changed over the ensuing 10 years. Your friends and followers can view this and leave their comments or reactions. This fun exercise can trigger emotions like nostalgia.

Sounds pretty innocent and harmless, eh?

Let’s see.

A quick overview of facial recognition

Basically, facial recognition technology identifies people from digital images. Primarily, it can perform the three functions below:

  1. It stores the image of a person for future reference. While storing the image, the system may use one or more techniques, like the relative position of features, skin texture, 3D recognition and so on, to associate the image with an identity. Here, the system plays the role of a reliable database.
  2. It can identify who the person is even if the person is unable or unwilling to disclose their own identity. This is done by matching the current image against those in the existing database. Here, the role of the technology is that of an identifier.
  3. It can verify and validate whether the person in front of the camera is the same person they claim to be. This too is done by matching the image of the person with the image in the database. Here, the technology plays the role of a verifier (a minimal sketch of this idea follows the list).
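
Here is a minimal sketch of the “verifier” role described in point 3, assuming the common embedding-plus-similarity approach: the system converts each face image into a compact numeric vector and accepts a match when the vectors are close enough. The embed() function and the threshold below are placeholders, not any vendor’s actual model.

```python
# Minimal sketch of face verification via embeddings and cosine similarity.
# embed() is a stand-in: real systems use a trained neural network here.

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a trained face-embedding model (NOT a real face model)."""
    return image.flatten().astype(float) / 255.0

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(stored_reference: np.ndarray, live_capture: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Is the person in front of the camera the same person as in the database?"""
    return cosine_similarity(embed(stored_reference), embed(live_capture)) >= threshold

# Usage with two dummy 32x32 grayscale "images":
rng = np.random.default_rng(0)
stored = rng.integers(0, 256, (32, 32))
live = stored.copy()           # pretend the same face was captured again
print(verify(stored, live))    # True: identical inputs clear the threshold
```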

Wikipedia and other sources credit Woody Bledsoe, Helen Chan Wolf, and Charles Bisson as the pioneers of automated facial recognition technology.

It has widespread applications, from policing and preventing fraud by stopping miscreants from entering a public event to simpler goals like school or factory attendance.

How AI is trained

If you thought facial recognition is as simple as overlaying one image on another, you couldn’t be more wrong.

It doesn’t happen that way.

No two images will match precisely in terms of lighting, head tilt, distance and so on. As a result, the recognition system must be smart. It must be able to make a few decisions itself, based on what it learns.

This learning can be enriched only if you feed the system with a variety of data. Data collection for this training is time-consuming and therefore expensive. And yet, you can’t always get the variety of data you want.

What if you could get all this for free?

That’s exactly what is happening. You could be training Facebook’s AI with the 10 year challenge.

For free.

Like hundreds of thousands doing it everyday.

What could have cost Facebook billions of dollars, you’re willingly doing it for free.

By participating in the Facebook 10 year challenge, you are not just giving FB access to your photos, you’re basically training its AI system.
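
To see why the challenge is such convenient training material, here is a hypothetical sketch of the kind of labeled record an age-progression model wants: two photos of the same person plus a known, clean time gap. The field names are invented for illustration; nothing here is Facebook’s actual data format.

```python
# Hypothetical structure of one training example for an age-progression model:
# two images of the same identity plus the exact number of years between them.
# The 10-year challenge hands over exactly this triple, pre-labeled by the user.

from dataclasses import dataclass

@dataclass
class AgeProgressionPair:
    user_id: str        # same identity in both photos
    photo_then: bytes   # e.g. the 2009 image
    photo_now: bytes    # e.g. the 2019 image
    years_elapsed: int  # the clean label the meme supplies for free

example = AgeProgressionPair(
    user_id="hypothetical_user_123",
    photo_then=b"<2009 image bytes>",
    photo_now=b"<2019 image bytes>",
    years_elapsed=10,
)
print(example.years_elapsed)  # 10
```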

How the Facebook 10 year challenge is helping Facebook’s AI training

The New York Times reported this now-famous tweet by author Kate O’Neill: “…how all this data could be mined to train facial recognition algorithms on age progression and age recognition.”

If you look at the Facebook 10-year challenge with a suspicious eye, a lot of pieces fall into place:

  • Amount of data: FB’s systems would need tons of data to perfect their AI. And that huge amount of data could have been very, very expensive. But never mind: you, the FB user, are giving away the same data absolutely free.
  • Variety of data: There’s no way Facebook’s AI systems could have got such varied data on their own. With the 10-year challenge having gone viral, FB is flush with not just data but a huge quantum of data.
  • Virtually free of cost: It’s difficult to exaggerate the importance of getting all this data at no cost to Facebook. It’s like receiving a free studentship to Harvard for your entire city.
  • No legal hassles: Had FB initiated the 10 year challenge (their officials claim Facebook didn’t start the challenge), it would have had tremendous problems getting clearances. But since it’s user-generated, there are no clearances to seek, no regulations to comply with.
  • Speed of data collection: If Facebook had initiated a training drive for its AI, it would have taken a long, long time to collect this variety and quantum of data. But with the 10-year challenge, it’s a windfall for Facebook. Presto! They have all the data they want, without the wait.
  • Lead over others: Spending almost no money and generating millions of images for AI training gives Facebook an almost unbeatable lead over any current or potential competitors. Don’t forget Instagram is also owned by Facebook.

What do you think? Is Facebook getting an upper hand in the race?

Image source: Photo by Glen Carrie on Unsplash



Facebook shared your data with others – secretly


A recent investigation by The New York Times indicates Facebook may have given access to more data to other companies than it told anyone about. 

Is it possible that Facebook has been cheating on millions of its users worldwide, as well as governments and investigating agencies? A recent investigative report by The New York Times clearly points out that Facebook has been sharing more “intrusive access to users’ personal data” than it told its users or governments about.

Worse still, Facebook may have been doing this for years now.

TL;DR

  • Facebook told users and government authorities that it has either not shared its users’ data without consent or has stopped sharing such data.
  • Some of the large corporates that have benefited from this data-sharing include Netflix, Microsoft, Amazon, Sony and Yahoo.
  • Facebook may have used unfair means to justify this sharing of info.

The New York Times investigation 

On December 18, 2018, The New York Times published a well-researched article about how Facebook has been handling users’ data. As per the post, there’s a huge gap between what Facebook tells its users and authorities about the way it shares users’ data and the way it actually does.

Apparently, Facebook has been giving access to its users’ data to some of the largest companies in the world.

Companies that have had access to Facebook users’ data include Amazon, Bing, Sony, Netflix, Spotify, Royal Bank of Canada, Yahoo… Needless to add, this access appears both illegal and unethical.

The list of companies that had access to Facebook users’ data reads eerily like Fortune 500.

What data different companies had access to under special arrangements with Facebook

The New York Times investigation showed that Facebook had made several deals with over 60 makers of smartphones, tablets and other devices to let these makers have access to Facebook users’ data. Here’s a list of what kind of data was available to some of them (note that all this, done without users’ permission, was illegal):

  • Microsoft’s search engine Bing was allowed to see the names of friends of almost all Facebook users, without consent from users
  • Netflix was allowed to use/read/delete Facebook users’ private messages
  • Amazon was given access to users’ names and contact information through friends
  • Yahoo was allowed to view streams of friends’ posts (Facebook claimed this was stopped much earlier, but this happened this summer)
  • Spotify could read/write/delete users’ private Facebook messages
  • Sony could obtain users’ email addresses through their friends
  • Royal Bank of Canada was allowed to read, write and delete users’ private messages
  • Yahoo could view real-time feeds of friends’ posts for a feature that Facebook claimed to have stopped in 2011
  • Apple was given the power to hide from Facebook users the fact that its devices were asking for users’ data
  • Rotten Tomatoes had access to data from a Facebook feature that was discontinued
  • Yandex, the Russian company that is into search engine and ecommerce, enjoyed access to Facebook data
  • Huawei, the company that was marked as a security threat by US intelligence, enjoyed special privileges since Facebook listed it as a partner

Why Facebook’s data sharing is serious

Essentially, Facebook allowed the companies mentioned above, and many more, access to users’ data without express permission from users.

Not only that, it appears Facebook had not been fully honest in what it disclosed to authorities. 

The biggest reason it is unfair and unethical (and possibly illegal) is this: the companies that were given access to Facebook users’ data were termed partners and were accorded special status. As a result, they were not subjected to extensive privacy program reviews.

In other words, Facebook seemed to have relaxed its rules for these companies.  

Here are some other reasons why Facebook’s sharing of data is unfair and unethical:

  • The data shared is used on the users themselves in the form of redesigned products or commercials. Either way, the “stolen data” reduces users’ free choice.
  • The business models used by Facebook and Google begin by assuming that secretly using data is the only way to make money and that no one would pay for their services.
  • Siphoning away users’ data is a lot like driving a car without a license – just because you have access to a car (data) doesn’t mean you can drive it without the right authority (user consent). Even if you’ve hurt no one, you’re still guilty.
  • It is bringing to life our fears that the moment you step into the online world, you kiss goodbye to your privacy.
  • Facebook does not audit its partners. As a result, it cannot ensure that the right to be forgotten is being followed the right way.
  • Once the data is out of the hands of Facebook and into the hands of third parties, the data collector (Facebook) ceases to have any control over the data. As a result, how the data will be abused becomes unpredictable.

How is Facebook defending its actions?

Facebook spokespersons are not sitting silently; they have been issuing their own versions of the truth and offering justifications and explanations. 

Here are some of the explanations Facebook is putting up in its own favor:

  • Facebook has found no evidence of data abuse by its partners.
  • For most of the partners, Facebook was not required to ask for consent from users. That was because the partners were treated like extensions of Facebook.
  • Some data, Facebook says, was in any case public data, and sharing that data did not violate users’ privacy rights.
  • Facebook had hired PricewaterhouseCoopers (PwC) to evaluate its data handling practices, and PwC has found nothing wrong.




Social Credit System China Part 2: Implementation, Benefits, Criticism


In Part 1 of China’s Social Credit System, we covered the basics of the credit system of China. We talked about the weaknesses of the current credit system in China and compared it with the current credit score systems in developed countries like the US, Germany, Switzerland and so on.

Next, we identified the 4 principles behind the proposed system and the objectives the system seeks to achieve. We ended with an infographic on the 14 focus areas of the system.

In this second and final part, we talk about how the social credit system of China will be implemented, what its benefits are – from the point of view of the Chinese government – and what criticisms are leveled against the proposed system.

Implementation of China’s Social Credit System (SCS)

The Social Credit System of China has the goal of establishing the basic structure of a credit system by 2020. That goal encompasses objectives like:

  • Raise awareness and the level of credibility within society
  • Efficiently regulate the economy without compromising governmental control
  • Improve and perfect the socialist market economy
  • Strengthen the societal governance program
  • Bring innovations to the financial sector with digital governance
  • Control and direct the behavior of individuals and businesses

The time-line of the history and implementation of China’s Social Credit System can be roughly represented in the following way:

  1. 1997: The “Bank Credit Registry and Consulting System” is established.
  2. 2002: A report delivered by the then President Jiang Zemin calls for establishment of a social credit system.
  3. 2006: China’s central bank, the People’s Bank of China (PBoC), establishes a Credit Reference Center.
  4. 2007: The State Council sets up an inter-ministerial joint conference for the setting up of the Social Credit System.
  5. 2010: Suining county in Jiangsu, China, introduces a mass credit plan that tracks things like individual conduct, law abidance and debt repayment and turns them into a score. The higher the score, the better the creditworthiness.
  6. 2013: China’s Supreme People’s Court (SPC) comes out with a blacklist of defaulting debtors. The list isn’t small – there are around 32,000 names.
  7. 2014: A plan with the title “State Council Notice concerning Issuance of the Planning Outline for the Construction of a Social Credit System 2014-2020” is released.
  8. 2015: The PBoC plans to issue licenses to eight private-sector companies to begin trials of the social credit system.
  9. 2016: An MoU is drafted between various Chinese bodies to define roles and exchange information.
  10. 2017: The plans to hand out licenses to private-sector companies are dropped. The major reason cited is the conflict of interest or the lack of willingness of these companies to share their information with competing companies.
  11. May 2018: Individuals with poor scores begin to be denied air tickets, high-speed train travel, luxury hotels and similar services and products.
  12. December 2018: A number of pilot projects are in operation across China but a single integrated system isn’t present – or at least visible.

Technology adaptation in China’s SCS

Naturally, technology will play a key role in the building of the credit system. The success of the entire system is almost entirely dependent on how data will be collected, sorted and used.

Exactly what technology will be used – or is already in use – is not clear at this stage. And that is partly understandable: if the authorities were to expose everything, the risk of gamification of the system would increase manifold.

Nevertheless, there is some understanding of the technology deployed for China’s Social Credit System (a toy illustration of the scoring mechanism follows this list):

  • Correlational Big Data analysis: Powerful computational facilities are being used to collect, store, process and share data. This hi-tech infrastructure will also produce actionable insights and generate results using probabilistic capabilities.
  • Face recognition: About 200 million cameras will be (or are being) used to collect data and improve facial recognition. Read more about how face recognition is being used in China in this post.
  • Bio-metric identification: The bio-metric database is being constantly expanded. New data is being appended to existing state records and corrections, where required, are being made.
  • License consolidation: Efforts are being made to use technology to consolidate multiple registrations of the same identity. Earlier, a business used to have one registration number for tax, another for benefits, another for environmental clearance and so on. The authorities would like to turn all this into a single number.
  • ID card or unique numbers: Individuals as well as businesses are brought under the social credit system. There will be an 18-digit unified social credit code.
  • Honest Shanghai model: The local municipal government uses an app called Honest Shanghai. It allows people to see the credit scores of local businesses as well as their own scores. While there are no punishments for bad behavior, good behavior brings rewards like discounts, priority access to facilities and so on.
  • No anonymity: The government is tightening requirements, so most forms of anonymous activity are slowly disappearing.
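
As a purely illustrative toy model of the mechanism described in this section (behaviors folded into a single score, and services gated on that score), consider the sketch below. The behaviors, weights and threshold are invented; nothing here reflects the actual, largely undisclosed Chinese implementation.

```python
# Toy illustration only: fold individual behaviors into a score and gate a
# service on that score. All weights, events and thresholds are invented.

BEHAVIOR_WEIGHTS = {              # hypothetical point adjustments
    "paid_debt_on_time": +10,
    "traffic_violation": -5,
    "defaulted_on_loan": -50,
}

def social_score(base: int, events: list[str]) -> int:
    """Aggregate a base score with the adjustments for observed behaviors."""
    return base + sum(BEHAVIOR_WEIGHTS.get(e, 0) for e in events)

def may_buy_air_ticket(score: int, threshold: int = 80) -> bool:
    """Blacklist-style gate: low scores are denied the service."""
    return score >= threshold

score = social_score(100, ["traffic_violation", "defaulted_on_loan"])
print(score, may_buy_air_ticket(score))   # 45 False
```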

Benefits of Chinese Social Credit System (SCS)

For all the Orwellian labels put on the SCS, there are some strong benefits the system enjoys.

Here are some of the most remarkable benefits of China’s Social Credit System, at least from the point of view of the Chinese government:

  1. Credit history: Unlike Europe or the USA, the usage of credit cards in China is modest and also very recent. As a result, people did not have a credit history – at least not a very detailed or structured one. The SCS is expected to create a detailed, reliable credit history of the people of China.
  2. Information filter: While governments across the world struggle to gag inconvenient news, China handles it the tough way. Li Hu’s writing and Liang Xiangyi’s eye-roll were not taken lightly. The SCS provides strong insulation against the spread of dissent.
  3. Social order: China’s population is not modest by any standard. Establishing and maintaining a consistent social order that is productive for the economy is something the SCS wishes to achieve.
  4. Tighter control: With the kind of punishment meted out to SCS offenders, not many would want to take chances. That helps China build tighter political control.
  5. Problem solving: The huge and diverse Chinese populace needs all the management and decision-making tools available. The SCS will provide critical economic inputs and data for the planned growth that China’s communist government holds very close to its heart.
  6. Administrative efficiency: The SCS is built on digitization and informatization. Naturally, it will improve administrative efficiency.
  7. Real-time data collection: It is plain as daylight that the SCS will provide unmatched, real-time data that can be used for efficient and timely problem solving.
  8. Integrated ecosystem: The various data collection tools the government has deployed to date were at best fragmented. The SCS will provide a single ecosystem to collate all data meaningfully.
  9. Party grip: The SCS smartly leverages people’s reliance on social media, travel, going out, luxury items and so on. Because offenders can’t access these services, they toe the line. The SCS heals a serious problem with an apparently gentle punishment.
  10. Corruption deterrence: The system may grow so strong that all public officials would be very scared of being corrupt.

Criticism against the Social Credit System of China

  1. Arbitrary abuse: The SCS isn’t always clear about what can hurt your score, or by how much. This makes the system susceptible to inequitable punishments, to say the least.
  2. Fear of associating: If your dad or your friend criticizes the government, your score will go down too.
  3. Transparency issues: There’s little to tell you how you can improve your scores.
  4. Personal choices: Individuals do not have much personal choice – they must remain geared to the national agenda. For instance, you could be punished if you spend too much time on social media.
  5. Political victimization: Li Hu is a scary example of what could happen to you if you are critical of the government. And this was before the SCS was in force. Imagine what’d happen after the SCS comes into force.
  6. Gamification: There are debates on how difficult it would be to gamify the system. There are also concerns that the wealthy and powerful may be able to use their positions and find means of artificially improving their ratings.

 

Sources used include:

  1. Wikipedia
  2. IberChina
  3. Science Alert
  4. The Guardian
  5. Ms Samantha Hoffman’s writings

 

 



Interview with Michelle Urban of Marketing 261


“Marketers today must be part artist, part scientist,” says Michelle Urban from Marketing 261. In that small phrase, she packs a lot of punch, as well as a hint of what the future holds for marketing.

From the days when marketing meant sending out fancy ads and offering great discounts, it has evolved into a craft, a complex profession with science and art in almost equal measure. The internet, with its metrics and tools of measurement, is fast making marketing an exact science.

On the other hand, the unprecedented changes that technology keeps bringing into our lives keep marketing from becoming a predictable, humdrum activity.

We spoke to Michelle to understand what she thought of brick-and-mortar businesses embracing the internet, AI, executive buy-in and a lot more.  Here goes: (Scroll down for an infographic.)

 

[Image: Michelle Urban of Marketing 261]

We hear Content is King so often. In a crowded marketplace, how do you suggest bringing readers to your blog when everyone is producing a lot of content and when readers’ attention span is continually shrinking?

Don’t write for the sake of writing. Write only for your target audience. Write about how they can work through their challenges, pain points, and obstacles. Write about how they can reach their goals and how they can be more successful in their job.  Give them useful and practical content.


If your readership is quickly skimming your content and bouncing off, your content is not geared towards their needs. When it’s not relevant or interesting, chances are readers are not going to engage or return. Make your content inspiring and educational for your target audience.

There are still a large number of successful, brick-and-mortar businesses that haven’t embraced the digital space. How do you think they should go about building their brand online and make sure their voice is heard, especially if even their customers aren’t frequent on the internet?

In this day and age, it’s silly for anyone NOT to have a website. Websites show credibility and, when done correctly, social credibility. All websites should be optimized for mobile and local search.

What are the three skills you rate as most important for a digital marketer in today’s world?

1. Be resourceful
2. Be part artist AND part scientist.
3. Be a risk taker


Businesses have begun investing in digital marketing, but there’s still some resistance when it comes to paying for tools and services that don’t directly lead to marketing  (e.g. SEO tools, email verification, analytics tool etc). How should marketers go about getting top-level executive buy-in for such matters?

Whether a marketer is asking for new tools, to sponsor an event, invest in new programs, or double down on an existing channel, the best way to get buy-in is by letting the metrics do the talking. Break down how the line item is going to help reach the company goal.

Executives speak one language and that is revenue. Provide the details that support the positive ROI. If you cannot show this, chances are you don’t need it.
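
As a hypothetical illustration of “letting the metrics do the talking”, the sketch below frames a tool purchase as a simple ROI calculation. All figures are invented; they are not from the interview.

```python
# Hypothetical example: express a tool purchase in the one language executives
# speak (revenue), as a simple return-on-investment figure. Numbers are invented.

def roi(revenue_gain: float, cost: float) -> float:
    """Return on investment as a fraction of cost."""
    return (revenue_gain - cost) / cost

tool_cost = 3_000                 # hypothetical annual price of an SEO tool
expected_revenue_gain = 12_000    # hypothetical attributable pipeline revenue
print(f"ROI: {roi(expected_revenue_gain, tool_cost):.0%}")   # ROI: 300%
```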


For the few industries that don’t expect too much business to come from online inquiries (e.g. heavy engineering) in the next few years, how do you suggest they should approach their online marketing efforts?

Your brand matters – bottom line. In today’s day and age, your brand needs to expand to the web in some way shape or form. If you’re in a field that is not web forward, chances are a potential buyer is going to be Googling something pertaining to your brand – the owners, the investors, the competitors. Having an online presence, like a website, can show credibility and social proof.

Artificial Intelligence (AI) is fast becoming a threat to many professions. How do you think marketers and freelancers can tackle that?

I can see many marketers using AI to increase their productivity. AI algorithms can help automate the repetitive tasks that many freelancers do on a weekly or monthly basis. This leads to increased productivity, which saves time and money.

Do you think customer loyalty will be a realistic goal to pursue over the next few years, given the enormous competition everywhere?

Yes, without a doubt. Companies should put their customers as #1 priority. This means building meaningful and lasting relationships with your customers and users.


So even if they move away from being your customer, they can still help to promote your brand positively due to the great experience they had while engaging with your product/service. Always leave the door open for your customers to return to you quickly.

Infographic on marketing

[Infographic: Key takeaways from the interview with Michelle Urban of Marketing 261]

About Michelle:

Michelle Urban is the founder of Marketing 261, a marketing shop for tech startups and small businesses. With a hands-on, get-it-done attitude, she and her team focus on executing measurable plans to get real results. For over 16 years, she’s built scalable marketing programs for demand creation, lead generation, customer advocacy, and engagement. A few of her clients include productboard, Rancher Labs, Layer, BetterManager, and more.



Link to Category: News & Media Blogs
