
Almostism

  • Rated: 1.00 / 5 (504 listing views)
  • Country: India
  • Audience: General Audience
  • Owner: Mayank Batavia
  • Listed: October 09, 2018 05:15:39 PM

A Little About Us

The blog occasionally discusses digital marketing and AI. But it mostly discusses laws that impact the way we use technology.


Cases against Google in different countries


Despite wielding almost unbelievable power across the world, Google is contesting a number of litigation cases in a number of countries.

Not long ago, there was some upsetting news that Google reads your gmail messages.

The following Google court cases mostly include instances of data privacy violations. Google antitrust incidents aren’t that well-known.

Alternatively, here’s an infographic on Google court cases in different countries.

Google court case fact file

2005: United States

On August 25, 2005, the US Department of Justice ordered Google to comply with a subpoena. The subpoena required Google to share “the text of each search string entered onto Google’s search engine”.
Outcome: The court ruled in Google’s favor, considering the privacy implications of sharing the search terms.

2010: Italy

Charges: Privacy violations on the grounds of a video that showed a physically challenged minor being bullied by the minor’s classmates.

What it led to: The minor’s parents accepted an undisclosed sum as financial compensation. Three Google executives received suspended sentences.

2010: Germany

Germany asked Google to hand over, by May 26, 2010, the data that Google Street View had harvested from private Wi-Fi networks. Google failed to meet the deadline.

Outcome: Google handed over the data in June 2010.

2010: Czech Republic

What happened: The Czech Office for Personal Data Protection prevented Google Street View from taking pictures beyond “ordinary sight from a street”.

Outcome: Google prevailed, but with some restrictions. In 2012, Google was allowed to resume taking pictures, albeit with conditions attached.

2012: United States

One of the better-known Google litigation incidents. The Electronic Privacy Information Center (EPIC) filed a complaint against Google before the Federal Trade Commission (FTC). Consequently, the FTC filed administrative proceedings against Google.

Charges against Google: The FTC claimed Google had violated the FTC Act by “convert(ing) the private, personal information of Gmail subscribers into public information for the company’s social network service Google Buzz…”.

The FTC felt Google had “misrepresented to users of its Gmail email service” and violated its own privacy promise.

Outcome: Google agreed to pay a civil penalty of over $20Mn but denied it was guilty.

2013: United States

What happened: Google had bypassed privacy settings of Apple’s Safari browser and was tracking users without their knowledge.

Outcome: This one went beyond fines. It made Google agree to things like not bypassing a browser’s cookie settings and ensuring its cookies would expire. Again, Google did not admit guilt.

2014: Spain

Situation: An individual requested Google Spain to remove his name from search results that appeared in connection with a forced sale, saying the event had happened long ago and was no longer current or relevant.

Outcome: The Court of Justice of the European Union ruled in favor of the individual, upholding his right to be forgotten.

2015: United Kingdom

A ruling by the Court of Appeal allowed British consumers the right to sue Google in the UK on the grounds of misuse of private information.

Major source: Wikipedia

The post Cases against Google in different countries appeared first on Technology services news.


Microsoft’s report on AI in Europe, with findings and criticism


There are no two opinions that whichever country or region masters Artificial Intelligence (AI) will dominate business, and pretty much everything else, for a long time to come.

That’s exactly why China has been working so hard on AI. China’s Artificial Intelligence Plan is a hot topic through and through. That, despite the fact that AI has the potential to cause job losses (or rather, a radical shift in job profiles). And job losses are something that China, as the world’s most populous country, can’t afford.

A lot has been said and analyzed about the way China has been trying to leverage AI. In fact, China’s social credit system is being built largely with the kind of tools that only AI can provide.

And in all that din, even industry observers almost forgot Europe.

So what’s the status of Artificial Intelligence in Europe? What is the EU Artificial Intelligence strategy?

Today, we analyze Microsoft’s report on AI in Europe.

Background of the AI report

Some time back, Microsoft commissioned Ernst & Young (E&Y) to carry out a study on the status of AI in Europe, with Sweden as a special focus. The report, formally titled “Artificial Intelligence in Europe: How 277 Major Companies Benefit from AI, Outlook for 2019 and Beyond”, summarizes the findings of that survey.

The study was carried out across 15 EU countries, and 277 companies participated. Most findings are presented in two parts: the 15 European markets and Sweden.

It’s best to treat this report as a study that shows you the overall picture but isn’t highly accurate at the granular level. It also wouldn’t be a good idea to treat the findings as truly representative of the entire European Union (EU).

How perfect is the report?

Like almost all surveys, this survey too suffers from biases.

What’s more, there are certain interpretations in the study that not everyone would agree with; for instance, it calls 44% a ‘majority’, whereas common sense says you call something a ‘majority’ only when the percentage crosses the halfway mark, i.e. 50%.

But that doesn’t mean the report is of no value – it provides great insights into the state of affairs of artificial intelligence in the countries surveyed.

It’s a wonderful peek into what goes on inside the participating companies: where they stand on AI, how equipped they are, what is at stake, how prepared they are, how they view the various challenges and opportunities ahead in the light of AI, and much more.

The structure of report on Artificial Intelligence in Europe

The report is divided into five major sections excluding the preface.

Section 1

The first section begins by offering an executive summary of the findings. It lists the participating companies and the research methodology, moves on to defining which technologies are included in the study, and concludes with an overview of investments in the field of AI in Europe.

Section 2

The second section deals with the role of AI in European markets. It begins by showing at what level within the participating companies the AI dialogue is taking place. It then explores the maturity and preparedness of these companies, based on what stage they have reached in their pursuit of building a competitive advantage through AI. It concludes by laying out where AI is deployed across the companies.

Section 3

The third section starts off by spelling out the expectations the companies have from AI over the next five years, and how closely those expectations are related to their core business today.

The next questions posed are key: what is a good framework for reaping the benefits of AI, and what are the sector-wise benefits? The last part talks about the risks involved with AI.

Section 4

The fourth section defines exactly eight competencies companies would need in order to leverage the true potential of AI. Each competency is discussed on the basis of how the companies view their own readiness with respect to it.

Section 5

The fifth and the final section is the shortest – it analyzes how the companies can take AI further.

Key findings of the Microsoft report on AI Europe

Here are the top 7 findings of Microsoft’s report on AI in Europe:

Importance

1. Data: A total of 71% of companies reported that Artificial Intelligence is an important topic at the executive management (C-) level. Against this, 28% of the companies reported that AI was an important topic at the non-managerial or employee level.

Interpretation: AI is still largely top-down rather than bottom-up. Far more people at the top than at the junior level believe AI is important.

Distribution

2. Data: The UK, France, and Germany have attracted 87% of investment in AI companies over the past decade.

Interpretation: The AI scene in Europe is nowhere close to even growth when 3 of the 25 member countries attract nearly 9 out of every 10 dollars invested.

Investments

3. Data: Private Equity and Venture Capital (PE & VC) account for 75% of the total investment that has poured in over the past ten years.

Interpretation: One, AI is a high-risk, high-return business, since more PE & VC firms than established corporates are investing in it. Two, these established corporates, at this stage, don’t believe investing in AI must be their top priority.

Status

4. Data: Of the companies that responded, 45% claimed they were at an advanced stage in AI, meaning that for these companies, Artificial Intelligence was “contributing to many processes and …. enabling quite advanced tasks”.

Interpretation: While heart-warming, the actual number is far too low to pinpoint any trend. Only 20 companies responded, of which 9 said they were at an advanced stage.

Deployment

5. Data: AI is deployed the most in the IT department (47%), while it’s least deployed in general management (4%) and HR (7%). Deployment in commercial activities (Sales, Marketing, Customer Services) is around 20%.

Interpretation: The relatively low deployment in Sales (19%), Marketing (22%) and Customer Services (24%) is a surprise, considering that chatbots were something of a rage all through 2018.

Impact

6. Data: Of the surveyed companies, 81% believe AI will have a high or significant impact on their industry over the next five years.

Interpretation: This is broadly corroborated by another data point in the report: only 21% of the companies believe AI is not important, or only slightly important, among their digital priorities.

Usage

7. Data: 74% of the companies surveyed expect to use AI to make predictions about their business.

Interpretation: As noted earlier, fewer than 1 in 5 (19%) companies deploy AI in Sales. So here you have a paradox: on the one hand, about 3 out of every 4 (74%) companies expect to use AI to make predictions. On the other, deployment for predicting and learning more about future sales trends sits at a low 19%. Simply put, companies are using AI predictions far more for other things than for predicting sales trends.

Some challenges to the Microsoft report

The report, while carefully researched and put together, certainly has its drawbacks. Some data could have been presented better, and there are some interpretations that you’d not fully agree with.

It’s important, therefore, to look at the report with a healthy pinch of criticism.

Here are the four major weaknesses of the report:

1. The way companies are chosen for the survey is far from perfect.

A pie chart on page 15 of the report shows the number of companies surveyed online per country. While most other countries have about 20 companies each in the survey, the UK, France and Germany have a total of 15 companies between them.

Page 21 of the same report mentions these three countries have attracted 87% of investment in AI companies over the past decade.

Effectively, countries attracting 87% of the total investment get less than 5.6% of the total representation in the study.

Question unanswered: How can you explain why countries with an overwhelmingly high proportion of investment are so heavily under-represented?

2. The findings could have been worded better.

This one relates specifically to Sweden.

The bar chart on page 21 says TMT (Technology, Media, Telecom) is the most active sector, just behind PE & VC, when it comes to investment.

However, a closer look reveals a significantly different picture.

The value per deal for PE & VC and TMT ($7.2Mn/deal and $8.3Mn/deal respectively) is nowhere close to the top. Life Science, for instance, booked $12.09Mn/deal, nearly 50% more than TMT.

At $30.6Mn/deal, the average investment per deal is the highest for Infrastructure.

Question: What are the grounds for labeling TMT the most active sector, just after PE & VC, when the value per deal is much higher for Infrastructure, Industrial Products and Life Sciences?
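For the record, here is a tiny sketch that simply orders the per-deal figures quoted above (value per deal is total investment divided by the number of deals; only the per-deal values reported are used here):

    # Rank sectors by value per deal, using the figures quoted above (in $Mn/deal).
    value_per_deal_musd = {
        "PE & VC": 7.2,
        "TMT": 8.3,
        "Life Science": 12.09,
        "Infrastructure": 30.6,
    }

    for sector, value in sorted(value_per_deal_musd.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{sector:<15} ${value:.2f}Mn/deal")

    # Infrastructure comes out on top, which is the basis of the question above.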

3. Since when did 44% become a majority?

On page 28, the report says “The majority consider AI to be important”, pointing to a small graph below. The number this caption points to is 44%. Traditionally, one uses the term majority only when the percentage is more than 50%.

Question: Any specific reasons the report calls 44% a “majority”?

4.  The interpretation comes close to being self-contradictory.

On page 61 of the report, it says, “A large proportion of companies consider themselves to have limited or no AI Leadership competency”.

Well, there’s another way of looking at the same data.

And the interpretation would come out exactly the opposite.

In the same graph, a total of 64% (32% + 23% + 9%) of the companies rate themselves as moderately (or more) competent.

Question: What could explain the claim that a large number of companies consider themselves to have limited or no AI competency, when the converse is more accurate?

What Microsoft and/or E&Y say about the gaps in the report

As regards the four challenges mentioned above, on 12 April 2019 we sent emails to the Microsoft and E&Y executives mentioned in the report and requested their response.

Our email was met with an auto-responder saying they’d respond after 6 May. No one from Microsoft has responded; one E&Y official associated with the report has offered to discuss the matter.

We’ll update as we learn more.

The post Microsoft’s report on AI in Europe, with findings and criticism appeared first on Technology services news.


Is Your Private Browsing Really Private? (Infographic)


Every time you go online to use an app or a website, you’re sending out your own personal data—sometimes without even knowing it. As more people grow cautious about where they submit their data, services will offer ways for their users to “protect” their data.


Private Browsing: Most browsers offer this option for users to anonymize their online activity. It’s helpful if you want your web history deleted as you browse. It also allows you to log into an account without logging out of others. But private browsing doesn’t protect you from malware and data theft. Although it’s easy to use, it doesn’t delete the information an ISP or WiFi network can collect about you.


Virtual Private Network: This tool sends your encrypted data through a secure network “tunnel.” It’s ideal if you want to anonymize your data while you go online over a public WiFi connection. People who travel often for work, for example, would benefit from this tool. However, your DNS provider and the intermediate network can still view your information. Not to mention, it’s also complex and expensive to maintain.


Proxy Server: This is another excellent way to anonymize your information while using public WiFi. A proxy functions as a “gateway”, allowing you to transmit information to a web page with some anonymity. But encryption quality varies, and proxy servers can sometimes be slow during high-traffic periods.
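To make the “gateway” idea concrete, here is a minimal sketch in Python (with a placeholder proxy address, not a real service) of routing a request through an HTTP proxy. Whatever the proxy sees, logs or encrypts is entirely up to the proxy operator, which is exactly the trade-off described above.

    import requests

    # Placeholder proxy address for illustration only; substitute a proxy you trust.
    PROXY = "http://203.0.113.10:8080"
    proxies = {"http": PROXY, "https": PROXY}

    # httpbin.org/ip echoes back the IP address the server sees. Through a working
    # proxy it should report the proxy's address rather than your own.
    response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
    print(response.json())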


Secure Browser: Secure browsers conceal a user’s location and usage. Although these browsers route traffic through their own network (making it difficult for people to see where traffic is coming from), that doesn’t mean all of your activity will be private. The pros: they are difficult to hack and give access to sites reachable only through secure browsers. However, secure browsers are illegal in some countries. They’re also slower to use than mainstream browsers.


Private Search Engines: Aside from the anonymous web services above, there has also been a recent rise in private search engines. Mainstream search engines cleverly track your data, because they know you’ll use them whenever you’re looking for a quick solution to an everyday task.


For more details about these services above, as well as the pros and cons of each, take a look at this infographic from Varonis.


The post Is Your Private Browsing Really Private? (Infographic) appeared first on Technology services news.


GDPR after 1 year: Where does data protection stand (with infographic)


It’s been nearly one year since the GDPR came into force. This article discusses where things stand, how GDPR-compliant organizations are, what the reactions have been and what fines have been levied.

The GDPR (General Data Protection Regulation) was brought into force on May 25, 2018. The twin objectives of the GDPR were to:

  • return to data subjects the rights over their personally identifiable data, including the right to be forgotten, and
  • bring uniformity to the different data protection laws of European Union (EU) member countries

The GDPR: An Overview

The GDPR applies to all organizations that operate within the EU. That includes organizations founded within the EU as well as those with a business unit located in the EU.

But that’s not all.

The GDPR also applies to any organization based or headquartered anywhere in the world if it offers its goods or services for sale to EU residents. It also applies, without any changes, to organizations that collect information about EU residents.

Organizations were expected to become GDPR compliant by the due date, May 25, 2018.

Non-compliance could lead to fines as heavy as €20 million, or 4% of annual global turnover, whichever is higher.

Where the GDPR stands today

It’s important, as well as interesting, to see how effective the GDPR has been and where organizations stand when it comes to GDPR compliance.

The official GDPR statement

The European Commission has come out with a statement and some interesting numbers. Here’s a summary:

  1. EU countries must adapt their national legislation to incorporate the GDPR. Five countries (Bulgaria, Czechia, Greece, Portugal and Slovenia) haven’t yet done so, or their processes are still insufficient.
  2. A total of 95,180 complaints had been received by Data Protection Authorities (DPAs) by the time the statement was prepared and released.
  3. By the time the statement was released, only three incidents had attracted fines. While two fines were no more than €20,000 each, the third was a huge €50 million, slapped on Google.
  4. Telemarketing, promotional email and video surveillance are the three activities that have attracted the maximum number of complaints.

Before we discuss which company was fined how much, let’s make one thing clear: non-compliance doesn’t always attract fines. Regulatory bodies have the discretion to simply issue a warning to the erring company and let it off the hook if the issue isn’t serious.

First action under the GDPR

The first company to receive an enforcement notice under the GDPR was Aggregate IQ (AIQ), a Canadian data analytics organization.

On October 4, 2018, the UK Information Commissioner’s Office (ICO) issued an Enforcement Notice against AIQ for allegedly running pro-Brexit campaigns while processing data in a way that was incompatible with the GDPR guidelines.

This notice is of interest for a number of reasons. Here are some of them:

  • AIQ has the dubious distinction of being the first company to have received a notice under the GDPR.
  • The allegation is that the data was processed for a purpose significantly different from the purpose for which it was collected.
  • While the data was processed for reasons different from those for which it was collected, there was no lawful basis to do so.
  • The data in question was collected before the date the GDPR came into force, i.e. May 25, 2018.
  • Apart from having processed data for a different purpose, the UK ICO says AIQ did not tell data subjects that their data had been shared by a third party.
  • AIQ is alleged to have violated Articles 5, 6 and 14 of the GDPR.
  • Articles 5 and 6 deal with the lawfulness of data processing, while Article 14 lays down rules for cases where data is collected from sources other than the data subject.

The significance of the first GDPR notice

You couldn’t have missed the specific Articles under which AIQ is alleged to have broken the law.

Note that this isn’t a data breach. It’s about the difference between the intent of data collection and the intent of data usage.

Firstly, it’s about transparency, since AIQ is alleged to have not told data subjects that their personal data was being shared by a third party.

Secondly, it’s about organizations having been given enough time to mend their affairs. AIQ had apparently collected the data before the GDPR came into force; for some reason, it chose not to become GDPR compliant.

Thirdly, it shows that the GDPR and the related authorities mean business. As pointed out earlier, this wasn’t a case of data breach, and yet the ICO swung into action.

Finally, it was a strict warning for those who obtain data from third-party sources. Even if you obtain your data from a third party, it’s your responsibility to ensure GDPR compliance.

What this survey says

A survey conducted by ThirdSector brought out some unusual insights into how the GDPR has impacted charities. Of the 176 charity workers surveyed, these were the chief findings:

  • For nearly one out of every five respondents (18%), the number of people contactable over email dropped by half.
  • A little over half (53%) of those surveyed saw their database shrink to some extent. They attributed this reduction in database size to GDPR compliance.
  • One in every five respondents (20%) believed their telephonic contact list had shrunk.

Sounds bad? Well, there’s another side to the story as well.

  • Seven out of every ten respondents (70%) agreed that the enforcement of the GDPR actually improved their organization’s data protection processes.
  • Over half (53%) of the respondents said regulations such as the GDPR helped them build and improve trust in the charity sector.

Here’s an infographic on the status of the GDPR a year after it was enforced:

Conclusion

If there’s one thing the GDPR can be said to have brought, it’s awareness. Organizations, charities, individuals… every entity has at least realized the kind of importance data deserves.

To begin with, organizations have become a great deal more conscious while handling data. While many activities directed towards GDPR compliance haven’t borne fruit, organizations’ efforts at least deserve acknowledgement.

Next, the way Britain’s ICO has cracked the whip makes it amply clear that the GDPR is not going to be enforced lightly.

Finally, for some unknown reason, compliance remains a challenge despite the fact that enough time was allotted to become compliant.

Experts have criticized the GDPR for not being fully clear about a number of issues. Regulators, on the other hand, feel that enough advance time was given and hence erring companies should be fined and acted against.

The way the GDPR places the rights over data back in the hands of data subjects is one of its brightest features. The earlier companies become compliant, the better it is for data privacy.

Photo credit freestocks.org on Unsplash

The post GDPR after 1 year: Where does data protection stand (with infographic) appeared first on Technology services news.


Seven questions of ethics facing Artificial Intelligence


Sundar Pichai, writing on the Google blog, defined Artificial Intelligence (AI) this way: “AI is computer programming that learns and adapts.” He went on to list some amazing things one could do with AI, like using sensors to predict wildfires, monitor cattle hygiene, diagnose diseases and more.

He further listed seven principles that, according to Pichai, will guide the work Google carries out in the field of AI. These principles include sensitivity to people’s privacy and a commitment to upholding standards of scientific excellence.

Even as we stand at the cusp of a revolution that’s likely unprecedented in human history, there’s an elephant in the room that a lot of people are looking away from: ethical issues.

What are the ethical dilemmas associated with AI?

Here is a short video on the ethical questions for AI:

Against the seven principles Google claims it will follow, there are some serious questions that AI researchers, policymakers and people in general must face. These questions are based on universal ethics and can have far-reaching implications for virtually everything the human race does.

These questions on the ethics of AI involve a variety of things, but they always include the concept of individual liberty, the idea of a protective state, and whether there’s a growing contradiction between the two. AI and ethics, therefore, span the entire spectrum from individualism to collectivism.

It’s critical that we discuss and address questions on ethics of artificial intelligence because very soon it might be too late. Here are the top ethical questions in artificial intelligence:

1. How can we minimize or eliminate bias in the algorithm of AI?

Maybe the correct question to ask would have been “Can we?” rather than “How can we?”

To begin with, artificial intelligence is created by humans, who are prone to strong prejudices. After the algorithm is designed, the machine is fed data to keep sharpening its ‘intelligence’.

The problem is that if the data fed is biased – showing more black criminals than white ones, for example – the machine will learn from the wrong kind of data. And its output will remain, at best, contentious.
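As a toy illustration of that point (invented numbers, not any real system), consider the simplest possible “model”: one that learns the average historical outcome per group. If one group was policed more heavily, the skew in the data becomes the prediction:

    from collections import defaultdict

    # Deliberately skewed, made-up training data: (group, was_arrested).
    # Group "B" was policed more heavily, so it shows up with far more
    # positive labels even if underlying behaviour is identical.
    training_data = [
        ("A", 0), ("A", 0), ("A", 0), ("A", 1),           # 25% positive
        ("B", 0), ("B", 1), ("B", 1), ("B", 1), ("B", 1), # 80% positive
    ]

    def fit_group_rates(rows):
        """Learn the average label per group: the crudest possible 'risk model'."""
        totals, counts = defaultdict(float), defaultdict(int)
        for group, label in rows:
            totals[group] += label
            counts[group] += 1
        return {group: totals[group] / counts[group] for group in totals}

    model = fit_group_rates(training_data)
    print(model)  # {'A': 0.25, 'B': 0.8} -- the bias in the data is now the output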

The fact that most developments in AI come from the private sector complicates matters. (China is an apparent exception because the Chinese government is very serious about AI, but that’s another story.) That’s because the private sector isn’t that worried about being answerable to the general populace, unlike a government department, which may be subject to intense scrutiny. This lack of transparency is worrying.

2. How secure will the AI be?

In some ways, this question is a derivative of the earlier one. Because developments in AI are mostly powered by the private sector, the degree of security could ultimately become a function of their business interests.

Businesses built in the digital economy haven’t always proven reliable when it comes to self-restraint and self-regulation. Facebook violating the data privacy of its users is often cited, correctly, as an example of how corporate greed soon overtakes enthusiastic corporate missions of being a good entity.

There are three questions of major significance one must address in light of security issues in AI.

One: What paradigms should corporates use to decide what levels of security are adequate?

Two: What is the optimal degree of policing the corporates?

Three: What checks and balances can corporates put in place to ensure the technology developed doesn’t end up with malicious actors?

Remember, in today’s world we’re talking about multinationals that may find themselves caught between two governments with conflicting requirements.

3. How can we stop AI from being deceptive?

Could AI be collecting information about you and keeping you in the dark?

In 2018, a story in The New York Times reported this: “Facebook empowered Apple to hide from Facebook users all indicators that its devices were asking for data. Apple devices also had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing, the records show.”

Earlier, in 2014, Google took over DeepMind, one of the world’s most important AI labs. Something similar to Google’s Project Dragonfly happened: there were deep concerns about the direction Google could lead DeepMind in. DeepMind has created neural networks that play video games the way humans do. Its AlphaGo beat Go world champion Lee Sedol in 2016 (the game Go is considered far more complex than chess).

An important announcement was made when Google took charge of DeepMind: an external review board was to be set up to ensure the lab’s research did not end up in military applications. Today, no one knows whether the board even exists, let alone whether it makes any decisions.

Both examples, from Facebook and Google, tell a similar story of bent morals: commercial interests have frequently overtaken noble projects, and there’s no reason to think AI will not end up the same way.

Without enough oversight and a tight policy framework, AI can be used deceptively.

4. What can we do to stop AI from being malicious?

One risk is AI falling into the wrong hands.

The other is even graver: what if the AI is designed with mala fide intentions to begin with?

Consider the self-flying selfie camera developed by Skydio. Using 13 cameras for visual tracking, the flying robot, called R1, manages itself; at launch, you have to tell it which person or object to follow (you “tell” R1 through a specially designed app that can sit on any smartphone).

That’s it.

One click on the app and the robot figures out the rest. It reads the surroundings, decodes the obstacles, locks onto its target and begins following.

The dangerous part is, you don’t even have to buy it; you can rent it for just $40 a day. For the price of one pack of Parents’ Choice ‘best value’ diapers, you can spy on anyone, anywhere for one full day.

While emerging technology can be breathtakingly exciting, it’s a serious mistake to launch the product commercially without understanding all the risks involved.

5. How far can you trust something that’s largely unregulated?

Even though it’s made to sound as if you can buy a gun in the US as easily as a can of Coke, that’s not entirely true. There’s some paperwork and a background check involved in buying guns, as EuroNews wrote.

The Skydio camera (mentioned above) is just one of the many, many devices that use AI and that you can buy as easily as a commodity on the open market. No questions asked.

Ironically, it’s been employees of organizations who have opposed deals that could have put AI to military use (autonomous weapons, for instance), not someone from outside those organizations.

For instance, employees of at least one company wrote an open letter to their CEO, questioning the stance, wisdom and policy to work with the military.

Protests are sometimes successful (Google moving out of a Pentagon project and out of Project Dragonfly). Sometimes they aren’t (Clarifai, the company to which the above open letter was directed, is going ahead with its business with the US military).

In the absence of detailed, strict and practical regulations, there’s no way of knowing what is cutting-edge and what is abominable.

6. Is there any way AI can be kept from being used for political vendetta?

China is using AI in some of the most innovative ways to bring in justice and stability. For instance, the 300 million cameras in China track the movement of people, enforce traffic discipline and deliver better, more efficient governance.

Critics are (rightly) wary of the way AI could be used by authoritarian governments like China.

What keeps single-party governments – like China’s – from using AI to silence the voices that are unpleasant to the government?

With the help of face recognition technologies, for instance, China is able to not only track the whereabouts of “notorious elements” but also make traveling and buying air-tickets extremely difficult for people who are on the government’s blacklist.

The use of AI to contain and effectively suppress political dissidents is one of the major risks emerging in China, but that’s not to suggest other countries (their ruling parties, to be more specific) are in any way immune to the temptation of abusing AI and unleashing a witch-hunt.

7. What about the odd risk of “If we don’t, others surely will”?

There’s an equally strong and logical argument that it’s important to make sure companies within Europe and the US don’t start misusing artificial intelligence for monetary gain either; after all, it’s not just China that must be controlled, right?

While this argument is perfectly rational, there’s a class of people who oppose the idea of excessively regulating the AI industry in Europe or the US.

This is their principal argument: when you hold back European or American companies with red tape, China isn’t going to wait. So, effectively, you run the risk of China overtaking every other country in artificial intelligence.

This is called the “If we don’t, others surely will” mentality, and there’s definitely some weight to it. A defiant China could jeopardize a great many things once it controls a technology that’s effectively banned in other countries.

Conclusion

There are no two opinions that the ethical questions in artificial intelligence are far too many and far too compelling to be taken lightly. All emerging technologies come with associated risks, and AI is no exception.

More dialogue, more openness and international cooperation are probably going to work best for AI. We can only hope that developments in AI do not outpace the political will to develop the correct regulations.

Featured image: Photo by Evan Dennis on Unsplash

The post Seven questions of ethics facing Artificial Intelligence appeared first on Technology services news.


From cars to supermarkets to apps: How much everyone knows about you


Like it or not, practically none of your data is really private anymore. Apparently, public is the new private.

It’s no secret that any data you voluntarily share on the internet will no longer be private, whether you like it or not and whether you explicitly permit it or not. Data privacy, at least for data that’s online, is almost a misnomer.

But what about the data you haven’t expressly permitted anyone to share?

What do various business enterprises, data brokers and analytics agencies know about you?

Does your supermarket know if you’re cheating on your partner? Does your car know whether you’re on some subscription drug? Does your favorite app know what political party you’ll likely endorse? Does your cab aggregator know you’ve just been fired?

It looks like yes, they do.

Let’s begin with supermarkets, one of the oldest data capturing bodies.

What supermarkets know about you

Supermarkets have been collecting and analysing data for a long time. One of the most popular methods of harnessing customer data has been loyalty programs and cards.

But the increase in computing power that Big Data brought suddenly allowed systems to make sense of reams and reams of data of almost unimaginable magnitude.

Here’s how supermarkets collect data about you and the way it’s used.

  • Sales: What you buy says truckloads about you. Based on small changes in your purchase patterns, supermarkets and big retailers can predict whether you’re on a diet (obviously), expecting a baby, divorcing or getting married, switching jobs and a lot, lot more. This info is used not only to figure out how to better lay out products so that you don’t miss them, but also to learn what offers you’ll find irresistible.
  • Browsing: Did you stop by the organic cereal rack? CCTV can pick up subtle cues about how long customers stop where and what they ultimately end up buying. Smart visual recognition systems will eventually decipher which combinations you browsed before finally making a decision.
  • Loyalty: How likely are you to be swayed by a competing retailer’s offer? A change in your buying patterns or quantities could be a signal, but there’s a lot more. For instance, a supermarket will want to convince you that it stocks everything you ever need. To do that, it will mine, buy and analyze everything it can learn about you – and that’s the way to build loyalty in the data-driven economy.

In this article, Charles Duhigg, the author of The Power of Habit: Why We Do What We Do in Life and Business, explains how data analysis can produce unbelievable results.

For instance, the retail giant Target can figure out if a customer is pregnant, irrespective of whether the customer wishes to disclose this information or not.
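To see how such a prediction can work in principle, here is a toy sketch, loosely inspired by the Duhigg article; the products and weights are invented for illustration and are not Target’s actual model:

    # Invented signal products and weights -- purely illustrative.
    PREGNANCY_SIGNALS = {
        "unscented lotion": 0.3,
        "prenatal vitamins": 0.5,
        "cotton balls (large bag)": 0.2,
        "hand sanitizer": 0.1,
    }

    def pregnancy_score(basket):
        """Sum the weights of any signal products found in a shopping basket."""
        return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

    basket = ["unscented lotion", "prenatal vitamins", "bread"]
    print(pregnancy_score(basket))  # 0.8 -> high enough to trigger baby-product offers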

What your car knows about you

Much as you may like driving a ‘connected’ car, there’s a lot of data the car collects about you.

Not surprisingly, you don’t know everything the car knows about you, where the data is sent, how it is processed and used, and whether you can do anything about it.

Yes, this data can be very useful for things like service reminders, vehicle usage patterns and other features that can make your driving safer and more pleasurable. That said, a good deal of information seeps out without you knowing it, including information you may be reluctant to share otherwise.

The Zebra created a neat infographic about what your car knows about you. Here are some of the things your car knows:

  • Your phone and text messages: Does your car system read out the text messages you receive? It likely stores data from them.
  • Your home – or permanent location: Do you drive to a certain place regularly after work? Your car interprets that location as your home.
  • Your driving skills: Do you brake too often? Do you wear your seatbelt? Do you check your phone while driving? Do you speed often? Depending on the make and model of your car, some or all of this – and more – is tracked and stored. Car black boxes, remember?

Your fitness app could be a felon too – unknowingly

When relatively low-tech areas like cars and supermarkets can collect so much data about you, how can you expect smartphone apps not to probe further and practically know you inside out?

In the remote likelihood that you might have forgotten what happened with the fitness app Strava, here’s a quick recap:

Strava, like most other fitness apps, encourages users to record their activities and let the app access their location.

Why?

If I live in downtown Bronx and I see someone in my neighborhood has jogged 3 miles today, it rubs my ego the wrong way and makes me run 3.5 miles. Good intentions, basically.

Unfortunately, it led to what was termed a security threat. A number of US military personnel are Strava users too. When they let the app access their locations, they inadvertently disclosed where they were stationed. That also exposed where the American military was currently located, as well as its supply and logistics routes.

Not exactly something you’d like to be proud of, right?

If you’re wondering why military personnel divulged the details, here’s one of the many likely explanations:

Apps like Strava make people pore closely over every single calorie they burnt, every step they walked, every yard they pedaled.

Exercising releases endorphins, the body’s natural opioid hormones. These hormones lead to a feeling of euphoria (remember the aha feeling after a round of sit-ups?).

This euphoria may be the culprit in making disciplined military personnel share their locations on Strava.

Privacy Policies of apps

Apps remind you of privacy policies, right? The ones you must click “I agree” to before you can use the app.

Well, they could do with somewhat simpler wording. A post rightly observed how privacy policies, written in almost unreadable legalese, are nearly impossible to read.

Here are three things you must understand about privacy policies (they stand out like a sore thumb):

  1. Level: The post referred to above found that the average grade level (of the language used) in a privacy policy is 11.5. Not exactly simple stuff, if you recall that Americans read at an average 8th-grade level.
  2. Size: Even at 250 words per minute, you’d need over 17 minutes just to read Facebook’s privacy policy (see the back-of-the-envelope sketch after this list). That doesn’t account for the time you’d need to comprehend whatever is written. Now recall all the sites and all the apps you use and ask yourself whether you actually read – and understand – those privacy policies. It’s pretty much certain you don’t.
  3. Variety: Nearly 6 million apps are estimated to be on Google’s Play Store and Apple’s App Store. Some of these apps are crafted by one- or two-person teams, while others are crafted by large corporates and conglomerates. They bring different levels of commitment to, and understanding of, privacy policies.
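Here is the back-of-the-envelope sketch behind that reading-time estimate; the word count is an assumption for illustration, not a measured figure:

    WORDS_PER_MINUTE = 250        # average adult reading speed used above
    policy_word_count = 4_300     # assumed length of a long privacy policy

    minutes = policy_word_count / WORDS_PER_MINUTE
    print(f"~{minutes:.0f} minutes just to read it")  # ~17 minutes, before comprehension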

Conclusion

From a pessimistic point of view, there isn’t going to be any data privacy if you use any online tools. At least not the way it was back in the 20th century.

Data is the new currency with which you pay for the usage of some apps. It doesn’t matter if the app is free (Facebook app) or paid (Procreate or Pocket Casts) – your data will always be at risk.

For instance, it’s clear that Facebook knows a lot more about you than you’d ever believe. Not only that, Facebook may be sharing your data with others secretly.

Apparently, there was never a better time to use the old parting words:

Take care.

The post From cars to supermarkets to apps: How much everyone knows about you appeared first on Technology services news.

