Translation services make it easier to communicate with someone who doesn’t speak the same language, whether you’re traveling abroad or living in a new country. But in the context of a global pandemic, government and health officials urgently need to deliver vital information to their communities, and every member of the community needs access to information in a language they understand. In the U.S. alone, that means reaching 51 million migrants in at least 350 languages, with information ranging from how to keep people and their families safe, to financial, employment or food resources.
To better understand the challenges in addressing these translation needs, we conducted a research study, and interviewed health and government officials responsible for disseminating critical information. We assessed the current shortcomings in providing this information in the relevant languages, and how translation tools could help mitigate them.
When organizations—from health departments to government agencies—update information on a website, it needs to be quickly accessible in a wide variety of languages. We learned that these organizations are struggling to keep up with the high volume of rapidly-changing content and lack the resources to translate this content into the needed languages.
Officials, who are already spread thin, can barely keep up with the many updates surrounding COVID-19—from the evolving scientific understanding, to daily policy amendments, to new resources for the public. Nearly all new information is coming in as PDFs several times a day, and many officials report not being able to offer professional translation for all needed languages. This is where machine translation can serve as a useful tool.
Machine translation is an automated way to translate text or speech from one language to another. It can take volumes of data and provide translations into a large number of supported languages. Although not intended to fully replace human translators, it can provide value when immediate translations are needed for a wide variety of languages.
If you're looking to translate content on the web, you have several options.
Use your browser
Many popular browsers offer translation capabilities, which are either built in (e.g. Chrome) or require installing an add-on or extension (e.g. Microsoft Edge or Firefox). To translate web content in Chrome, all you have to do is go to a webpage in another language, then click “Translate” at the top.
Use a website translation widget
If you are a webmaster of a government, non-profit, and/or non-commercial website (e.g. academic institutions), you may be eligible to sign up for the Google Translate Website Translator widget. This tool translates web page content into 100+ different languages. To find out more, please visit the webmasters blog.
Upload PDFs and documents
Google Translate supports translating many different document formats (.doc, .docx, .odf, .pdf, .ppt, .pptx, .ps, .rtf, .txt, .xls, .xlsx). By simply uploading the document, you can get a translated version in the language that you choose.
Millions of people need translations of resources at this time. Google’s researchers, designers and product developers are listening. We are continuously looking for ways to improve our products and come to people’s aid as we navigate the pandemic.
Google Analytics helps you measure the actions people take across your app and website. By applying Google’s machine learning models, Analytics can analyze your data and predict future actions people may take. Today we are introducing two new predictive metrics to App + Web properties. The first is Purchase Probability, which predicts the likelihood that users who have visited your app or site will purchase in the next seven days. And the second, Churn Probability, predicts how likely it is that recently active users will not visit your app or site in the next seven days. You can use these metrics to help drive growth for your business by reaching the people most likely to purchase and retaining the people who might not return to your app or site via Google Ads.
Analytics will now suggest new predictive audiences that you can create in the Audience Builder. For example, using Purchase Probability, we will suggest the audience “Likely 7-day purchasers” which includes users who are most likely to purchase in the next seven days. Or using Churn Probability, we will suggest the audience “Likely 7-day churning users” which includes active users who are not likely to visit your site or app in the next seven days.
In the past, if you wanted to reach people most likely to purchase, you’d probably build an audience of people who had added products to their shopping carts but didn’t purchase. However, with this approach you might miss reaching people who never selected an item but are likely to purchase in the future. Predictive audiences automatically determine which customer actions on your app or site might lead to a purchase—helping you find more people who are likely to convert at scale.
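As a toy illustration of the idea, consider scoring users with a simple logistic function and keeping those above a threshold. Everything below is invented for illustration (the feature names, the hand-picked weights, the threshold); Google's actual predictive models are proprietary and far more sophisticated:

```python
import math

def purchase_probability(user):
    """Toy stand-in for a Purchase Probability score: a hand-weighted
    logistic model over a few invented behavioral features."""
    score = (
        1.2 * user["cart_adds"]
        + 0.8 * user["sessions_last_7d"]
        + 0.5 * user["viewed_checkout"]
        - 2.0  # bias term
    )
    return 1 / (1 + math.exp(-score))

def likely_7day_purchasers(users, threshold=0.5):
    """Assemble a 'Likely 7-day purchasers' style audience."""
    return [u["id"] for u in users if purchase_probability(u) >= threshold]

users = [
    {"id": "u1", "cart_adds": 2, "sessions_last_7d": 3, "viewed_checkout": 1},
    {"id": "u2", "cart_adds": 0, "sessions_last_7d": 1, "viewed_checkout": 0},
]
print(likely_7day_purchasers(users))  # → ['u1']
```

The point of the sketch is the shape of the approach: rather than a hand-built rule like "added to cart but didn't buy," a learned score ranks every user by likelihood of converting, so the audience includes people a manual rule would miss.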
Imagine you run a home improvement store and are trying to drive more digital sales this month. Analytics will now suggest an audience that includes everyone who is likely to purchase in the next seven days—on either your app or your site—and then you can reach them with a personalized message using Google Ads.
Or let’s say you’re an online publisher and want to maintain your average number of daily users. You can build an audience of users who are likely to not visit your app or site in the next seven days and then create a Google Ads campaign to encourage them to read one of your popular articles.
In addition to building audiences, you can also use predictive metrics to analyze your data with the Analysis module. For example, you can use the User Lifetime technique to identify which marketing campaign helped you acquire users with the highest Purchase Probability. With that information you may decide to reallocate more of your marketing budget towards that high potential campaign.
You will soon be able to use predictive metrics in the App + Web properties beta to build audiences and help you determine how to optimize your marketing budget. In the coming weeks these metrics will become available in properties that have purchase events implemented or are automatically measuring in-app purchases once certain thresholds are met.
If you haven't yet created an App + Web property, you can get started here. We recommend continuing to use your existing Analytics properties alongside an App + Web property.
AI is a powerful tool that will have a significant impact on society for many years to come, from improving sustainability around the globe to advancing the accuracy of disease screenings. As a leader in AI, we’ve always prioritized the importance of understanding its societal implications and developing it in a way that gets it right for everyone.
That’s why we first published our AI Principles two years ago and why we continue to provide regular updates on our work. As our CEO Sundar Pichai said in January, developing AI responsibly and with social benefit in mind can help avoid significant challenges and increase the potential to improve billions of lives.
The world has changed a lot since January, and in many ways our Principles have become even more important to the work of our researchers and product teams. As we develop AI we are committed to testing safety, measuring social benefits, and building strong privacy protections into products. Our Principles give us a clear framework for the kinds of AI applications we will not design or deploy, like those that violate human rights or enable surveillance that violates international norms. For example, we were the first major company to have decided, several years ago, not to make general-purpose facial recognition commercially available.
Over the last 12 months, we’ve shared our point of view on how to develop AI responsibly—see our 2019 annual report and our recent submission to the European Commission’s Consultation on Artificial Intelligence. This year, we’ve also expanded our internal education programs, applied our principles to our tools and research, continued to refine our comprehensive review process, and engaged with external stakeholders around the world, while identifying emerging trends and patterns in AI.
Our researchers are working on computer science and technology not just for today, but for tomorrow as well. They continue to play a leading role in the field, publishing more than 200 academic papers and articles in the last year on new methods for putting our principles into practice. These publications address technical approaches to fairness, safety, privacy, and accountability to people, including effective techniques for improving fairness in machine learning at scale, a method for incorporating ethical principles into a machine-learned model, and design principles for interpretable machine learning systems.
Over the last year, a team of Google researchers and collaborators published an academic paper proposing a framework called Model Cards. Similar to a food nutrition label, a Model Card reports an AI model’s intended uses and its performance for people from a variety of backgrounds. We’ve applied this research by releasing Model Cards for the Face Detection and Object Detection models used in Google Cloud’s Vision API product.
Our goal is for Google to be a helpful partner not only to researchers and developers who are building AI applications, but also to the billions of people who use them in everyday products. We’ve gone a step further, releasing 14 new tools that help explain how responsible AI works, from simple data visualizations on algorithmic bias for general audiences to Explainable AI dashboards and tool suites for enterprise users. You’ll find a number of these within our new Responsible AI with TensorFlow toolkit.
As we’ve shared previously, Google has a central, dedicated team that reviews proposals for AI research and applications for alignment with our principles. Operationalizing the AI Principles is challenging work. Our review process is iterative, and we continue to refine and improve our assessments as advanced technologies emerge and evolve. The team also consults with internal domain experts in machine-learning fairness, security, privacy, human rights, and other areas.
Whenever relevant, we conduct additional expert human rights assessments of new products in our review process, before launch. For example, we enlisted the nonprofit organization BSR (Business for Social Responsibility) to conduct a formal human rights assessment of the new Celebrity Recognition tool, offered within Google Cloud Vision and Video Intelligence products. BSR applied the UN’s Guiding Principles on Business and Human Rights as a framework to guide the product team to consider the product’s implications across people’s privacy and freedom of expression, as well as potential harms that could result, such as discrimination. This assessment informed not only the product’s design, but also the policies around its use.
In addition, because any robust evaluation of AI needs to consider not just technical methods but also social context(s), we consult a wider spectrum of perspectives to inform our AI review process, including social scientists and Google’s employee resource groups.
As one example, consider how we’ve built upon learnings from a case we published in our last AI Principles update: the review of academic research on text-to-speech (TTS) technology. Since then, we have applied what we learned in that earlier review to establish a Google-wide approach to TTS. Google Cloud’s Text-to-Speech service, used in products such as Google Lens, puts this approach into practice.
Because TTS could be used across a variety of products, a group of senior Google technical and business leads were consulted. They considered the proposal against our AI Principles of being socially beneficial and accountable to people, as well as the need to incorporate privacy by design and to avoid technologies that cause or are likely to cause overall harm.
Reviewers identified the benefits of an improved user interface for various products, and significant accessibility benefits for people with hearing impairments.
They considered the risks of voice mimicry and impersonation, media manipulation, and defamation.
They took into account how an AI model is used, and recognized the importance of adding layers of barriers for potential bad actors, to make harmful outcomes less likely.
They recommended on-device privacy and security precautions that serve as barriers to misuse, reducing the risk of overall harm from use of TTS technology for nefarious purposes.
The reviewers recommended approving TTS technology for use in our products, but only with user consent and on-device privacy and security measures.
They did not approve open-sourcing of TTS models, due to the risk that someone might misuse them to build harmful deepfakes and distribute misinformation.
To increase the number and variety of outside perspectives, this year we launched the Equitable AI Research Roundtable, which brings together advocates for communities of people who are currently underrepresented in the technology industry, and who are most likely to be impacted by the consequences of AI and advanced technology. This group of community-based, non-profit leaders and academics meets with us quarterly to discuss AI ethics issues, and learnings from these discussions help shape operational efforts and decision-making frameworks.
Our global efforts this year included new programs to support non-technical audiences in their understanding of, and participation in, the creation of responsible AI systems, whether they are policymakers, first-time ML (machine learning) practitioners or domain experts. These included:
Partnering with Yielding Accomplished African Women to implement the first-ever Women in Machine Learning Conference in Africa. We built a network of 1,250 female machine learning engineers from six different African countries. Using the Google Cloud Platform, we trained and certified 100 women at the conference in Accra, Ghana. More than 30 universities and 50 companies and organizations were represented. The conference schedule included workshops on Qwiklabs, AutoML, TensorFlow, a human-centered approach to AI, mindfulness and #IamRemarkable.
Releasing, in partnership with the Ministry of Public Health in Thailand, the first study of its kind on how nurses deployed a new AI system to screen patients for diabetic retinopathy, applying nurses’ and patients’ input to make recommendations on future AI applications.
Launching an ML workshop for policymakers featuring content and case studies covering the topics of Explainability, Fairness, Privacy, and Security. We’ve run this workshop, via Google Meet, with over 80 participants in the policy space with more workshops planned for the remainder of the year.
Hosting the PAIR (People + AI Research) Symposium in London, which focused on participatory ML and marked PAIR’s expansion to the EMEA region. The event drew 160 attendees across academia, industry, engineering, and design, and featured cross-disciplinary discussions on human-centered AI and hands-on demos of ML Fairness and interpretability tools.
We remain committed to external, cross-stakeholder collaboration. We continue to serve on the board and as a member of the Partnership on AI, a multi-stakeholder organization that studies and formulates best practices on AI technologies. As an example of our work together, the Partnership on AI is developing best practices that draw from our Model Cards proposal as a framework for accountability among its member organizations.
We know no system, whether human or AI powered, will ever be perfect, so we don’t consider the task of improving it to ever be finished. We continue to identify emerging trends and challenges that surface in our AI Principles reviews. These prompt us to ask questions such as when and how to responsibly develop synthetic media, keep humans in an appropriate loop of AI decisions, launch products with strong fairness metrics, deploy affective technologies, and offer explanations on how AI works, within products themselves.
As Sundar wrote in January, it’s crucial that companies like ours not only build promising new technologies, but also harness them for good—and make them available for everyone. This is why we believe regulation can offer helpful guidelines for AI innovation, and why we share our principled approach to applying AI. As we continue to responsibly develop and use AI to benefit people and society, we look forward to continuing to update you on specific actions we’re taking, and on our progress.
We’ve always said that if Google Maps is about finding your way, Google Earth is about getting lost. With Google Earth, you can see our planet like an astronaut from space, then travel anywhere on it in seconds with a click or tap. Even after an entire afternoon exploring cities, landscapes and stories on Google Earth, you'll have barely scratched the surface.
Now 15 years old, Google Earth is still the world’s biggest publicly accessible repository of geographic imagery. It combines aerial photography, satellite imagery, 3D topography, geographic data, and Street View into a tapestry you can explore. But Google Earth is much more than a 3D digital globe. The underlying technology has democratized mapmaking, allowing anyone to better understand our world and take action to create positive change.
Billions of people have used Google Earth over the years. Here are 15 stories that have inspired us:
1. Responding to natural disasters. Two months after Google Earth launched, we quickly realized that people were not just using it to plan their vacations. Hurricane Katrina hit the Gulf Coast in August 2005, and the Google Earth team quickly worked with the National Oceanic and Atmospheric Administration (NOAA) to make updated imagery available to first responders on the ground, supporting rescue and relief operations and helping officials understand the hurricane’s impact.
2. Taking virtual field trips. In 2006, former English teacher, Jerome Burg, first used Google Earth to create Lit Trips, tours that follow the journeys of literature’s well-known characters. Today the project includes more than 80 Lit Trips for teachers and students of all grade levels. Each tour includes thought-provoking discussion starters, classroom resources and enrichment activities.
3. Protecting culture. When Chief Almir of the Suruí people first glimpsed Google Earth on a visit to an Internet cafe, the indigenous leader immediately grasped its potential as a tool for conserving his people’s traditions. In 2007, Chief Almir traveled thousands of miles from the Brazilian Amazon to Google headquarters to invite Google to train his community to use Google Earth. The Suruí people went on to build their Cultural Map on Google Earth which included hundreds of cultural sites of significance in their rainforest.
4. Decoding animal behaviors. In 2008, German and Czech researchers used Google Earth to look at 8,510 domestic cattle in 308 pastures across six continents. The images led them to make the amazing discovery that certain species of cattle and deer align themselves to the magnetic poles while grazing or resting.
5. Reuniting families. Saroo Brierley was accidentally separated from his family at the age of five and ended up in an orphanage. Luckily, Saroo was adopted by a loving family in Australia. As an adult, Saroo was curious about his origins and painstakingly traced his way back home to India using the satellite imagery in Google Earth. He was able to reunite with his biological mother in 2011 after 25 years apart. View the story in Google Earth.
6. Helping communities impacted by war. The HALO Trust—the world's oldest, largest and most successful humanitarian landmine clearance agency—uses Google Earth to identify and map mined areas. The HALO Trust has cleared 1.8 million landmines, 11.9 million items of other explosive remnants of war and 57.2 million items of small arms munitions in 26 countries and territories around the world.
7. Protecting elephants from poachers. To protect elephants from poachers seeking their ivory tusks, Save the Elephants built an elephant tracking system. Since 2009, they have outfitted hundreds of elephants with satellite collars to track their movements in real time on Google Earth. Their partner organizations, including rangers at the Lewa Wildlife Conservancy, use Google Earth in the fight against elephant poachers across the conservancy and privately owned rangelands in Kenya.
8. Discovering unknown forests. Dr. Julian Bayliss used Google Earth to explore high-altitude rainforests in Africa. For almost as long as Google Earth has existed, Dr. Bayliss has been systematically flying over northern Mozambique in Google Earth and scanning the satellite imagery. One day he came across what appeared to be a mountaintop rainforest. His virtual discovery set off a chain of events that led to the discovery of an untouched rainforest ecosystem atop Mount Lico in 2018.
9. Supporting students in rural classrooms. Padmaja Sathyamoorthy and others who work at the India Literacy Project (ILP) use Google Earth to build interactive content for rural classrooms, helping improve literacy for 745,000 students across India. Padmaja says, “ILP has made history and geography come alive with new tools and media content that capture the imagination of young minds. The project expands students’ horizons. It’s not just about learning curriculum from a textbook. I believe it creates a curiosity and a love for learning that will last a lifetime.”
10. Inspiring positive environmental change. The nonprofit organization, HAkA, used Google Earth to show threats to the Leuser Ecosystem, the last place on Earth where orangutans, rhinos, elephants and tigers coexist in the wild. This Google Earth tour helped raise awareness about the region and incited positive changes in the area.
12. Celebrating global language diversity. In 2019, Tania Haerekiterā Tapueluelu Wolfgramm, a Māori and Tongan woman, traveled across the Pacific Ocean to interview and record speakers of 10 different Indigenous languages for Google Earth. The project featured 50 Indigenous language speakers from around the world in honor of the 2019 International Year of Indigenous Languages.
13. Catching (fictional) super thieves. People around the world followed the trail of Carmen Sandiego and the V.I.L.E. operatives by solving the three capers launched in Google Earth in 2019.
14. Telling more compelling news stories. Journalists have long used the rich imagery in Google Earth to create more engaging stories. Vox Video used Google Earth Studio to tell the story of how the Event Horizon telescope collected 54-million-year-old photons to take the first ever picture of a black hole.
15. Homecoming during COVID-19. During Golden Week in Japan, most people visit their hometowns, but this year that wasn’t possible due to COVID-19. To help homesick natives, a group from Morioka city developed a tour in Google Earth that let people virtually take the bullet train to Morioka station and visit beloved locations in the city.
A big thank you to everyone for being with us on this journey. Our hope is that Google Earth will continue to inspire curiosity and move us to care more deeply about our beautiful planet and all who live here. We look forward to seeing what the next 15 years brings!
Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party.
A few years ago, I learned that a translation from Finnish to English using Google Translate led to an unexpected outcome. The sentence “hän on lentäjä” became “he is a pilot” in English, even though “hän” is a gender-neutral word in Finnish. Why did Translate assume it was “he” as the default?
As I started looking into it, I became aware that just like humans, machines are affected by society’s biases. The machine learning model for Translate relied on training data, which consisted of the input from hundreds of millions of already-translated examples from the web. “He” was more associated with some professions than “she” was, and vice versa.
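A toy sketch can show how that kind of skew arises. This is not how Google Translate actually works; the tiny corpus and the frequency rule below are invented purely to illustrate how a model trained on imbalanced data reproduces the imbalance:

```python
from collections import Counter

# Invented mini-corpus of (profession, pronoun) pairs, standing in for
# millions of already-translated sentences scraped from the web.
corpus = [
    ("pilot", "he"), ("pilot", "he"), ("pilot", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

def default_pronoun(profession):
    """Pick the pronoun most often paired with a profession in the corpus.
    The output reflects the corpus's imbalance, not any fact about people."""
    counts = Counter(p for prof, p in corpus if prof == profession)
    return counts.most_common(1)[0][0]

print(default_pronoun("pilot"))  # → 'he', the corpus majority
```

Even this crude rule "learns" the association, which is why offering both feminine and masculine translations, rather than a single default, matters.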
Now, Google provides options for both feminine and masculine translations when adapting gender-neutral words in several languages, and there’s a continued effort to roll it out more broadly. But it’s still a good example of how machine learning can reflect the biases we see all around us. Thankfully, there are teams at Google dedicated to finding human-centered solutions to making technology inclusive for everyone. I sat down with Been Kim, a Google researcher working on the People + AI Research (PAIR) team, who devotes her time to making sure artificial intelligence puts people, not machines, at its center, and helping others understand the full spectrum of human interaction with machine intelligence. We talked about how you make machine learning models easy to interpret and understand, and why it’s important for everybody to have a basic idea of how the technology works.
Why is this field of work so important?
Machine learning is such a powerful tool, and because of that, you want to make sure you’re using it responsibly. Let’s take an electric saw as an example. It’s a super powerful tool, but you need to learn how to use it in order not to cut your fingers. Once you learn, it’s so useful and efficient that you’ll never want to go back to using a hand saw. And the same goes for machine learning. We want to help you understand and use machine learning correctly, fairly and safely.
Since machine learning is used in our everyday lives, it’s also important for everyone to understand how it impacts us. Whether you’re a coffee shop owner using machine learning to optimize the purchase of your beans based on seasonal trends, or your doctor diagnoses you with a disease with the help of this technology, it’s often crucial to understand why a machine learning model has produced the outcome it has. Developers and decision-makers, in turn, need to be able to explain a machine learning model and present it to people in understandable terms. This is what we call “interpretability.”
How do you make machine learning models easier to understand and interpret?
There are many different ways to make an ML model easier to understand. One way is to make the model reflect how humans think from the start, and have the model "trained" to provide explanations along with predictions, meaning when it gives you an outcome, it also has to explain how it got there.
Another way is to try and explain a model after it has been trained. Many models are built simply to map inputs to outputs, optimizing for prediction accuracy without any explanation of “how” built in. With such a model you can still plug things into it and see what comes out, which gives you some insight into how it generally makes decisions, but you don’t necessarily know exactly how specific inputs are interpreted by the model in specific cases.
One way to explain models after they’ve been trained is using low-level features or high-level concepts. Let me give you an example of what this means. Imagine a system that classifies pictures: you give it a picture and it says, “This is a cat.” With low-level features, I ask the machine which pixels mattered for that prediction. It can point to one pixel or another, and we might see that the pixels in question show the cat’s whiskers. But we might also see a scattering of pixels that don’t appear meaningful to the human eye, or find that the model has made the wrong interpretation. High-level concepts are more similar to the way humans communicate with one another. Instead of asking about pixels, I’d ask, “Did the whiskers matter for the prediction? Or the paws?” Again, the machine can show me what imagery led it to reach this conclusion, and based on the outcome, I can understand the model better. (Together with researchers from Stanford, we’ve published papers that go into further detail on this for those who are interested.)
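A minimal sketch of that pixel-level, low-level kind of explanation is occlusion: hide one input at a time and measure how much the model’s score drops. The six-pixel “image” and the hand-weighted “classifier” below are invented for illustration; real classifiers are neural networks, and real attribution methods are more sophisticated:

```python
def cat_score(image):
    """Toy stand-in for a trained classifier: responds most strongly to
    pixels 2 and 3, our pretend 'whisker' region."""
    weights = [0, 1, 6, 6, 1, 0]
    return sum(w * p for w, p in zip(weights, image))

def pixel_importance(image, model):
    """Occlusion attribution: for each pixel, the score drop when that
    pixel is hidden. Bigger drop = pixel mattered more."""
    base = model(image)
    drops = []
    for i in range(len(image)):
        occluded = list(image)
        occluded[i] = 0  # hide this pixel
        drops.append(base - model(occluded))
    return drops

image = [1, 1, 1, 1, 1, 1]
drops = pixel_importance(image, cat_score)
print(drops.index(max(drops)))  # → 2, one of the 'whisker' pixels
```

Reading off which pixels have the largest drops is exactly the kind of answer a low-level explanation gives; a high-level concept explanation would instead group such pixels into a human term like “whiskers.”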
Can machines understand some things that we humans can’t?
Yes! This is an area that I am very interested in myself. I am currently working on a way to showcase how technology can help humans learn new things. Machine learning technology is better at some things than we are; for example it can analyze and interpret data at a much larger scale than humans can. Leveraging this technology, I believe we can enlighten human scientists with knowledge they haven't previously been aware of.
What do you need to be careful of when you’re making conclusions based on machine learning models?
First of all, we have to be careful that human bias doesn’t come into play. Humans carry biases that we simply cannot help and are often unaware of, so if an explanation is left up to a human’s interpretation, as it often is, then we have a problem. Humans read what they want to read. This doesn’t mean that you should remove humans from the loop; humans communicate with machines, and vice versa. But machines need to communicate their outcomes in the form of a clear statement backed by quantitative data, not one that is vague and completely open to interpretation. If the latter happens, the machine hasn’t done a very good job and the human isn’t able to provide good feedback to the machine. It could also be that the outcome simply lacks additional context only a human can provide, or that it would benefit from caveats, so that people can make an informed judgment about the results of the model.
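A small sketch of the difference between a vague statement and a quantitative one (the wording and fields below are invented, not any real product’s output): attaching a number and a caveat to each result gives a human something concrete to judge and respond to.

```python
def quantitative_report(label, probability, caveat):
    """Format a model outcome as a clear, quantitative statement with a
    caveat, rather than a vague claim open to interpretation."""
    return f"Predicted: {label} (confidence {probability:.0%}). Caveat: {caveat}"

# Vague: "This might be a cat."  Quantitative:
print(quantitative_report("cat", 0.87, "trained mostly on indoor photos"))
```

The reported number and caveat are what let a person decide whether to trust the result or send corrective feedback back to the system.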
What are some of the main challenges of this work?
Well, one of the challenges for computer scientists in this field is dealing with non-mathematical objectives: things you might want to optimize for but don’t have an equation for. You can’t always define what is good for humans using math. That requires us to test and evaluate methods with rigor, and to have a table full of different people to discuss the outcome. Another challenge has to do with complexity. Humans are so complex that we have a whole field of study, psychology, devoted to them. So in my work we don’t just face computational challenges; we also have to consider complex humans. Value-based questions such as “what defines fairness?” are even harder. They require interdisciplinary collaboration and a diverse group of people in the room to discuss each individual matter.
What's the most exciting part?
I think interpretability research and methods are making a huge impact. Machine learning technology is a powerful tool that will transform society as we know it, and helping others to use it safely is very rewarding.
On a more personal note, I come from South Korea and grew up in circumstances where I feel I didn’t have too many opportunities. I was incredibly lucky to get a scholarship to MIT and come to the U.S. When I think about the people who haven’t had these opportunities to be educated in science or machine learning, and knowing that this machine learning technology can really help and be useful to them in their everyday lives if they use it safely, I feel really motivated to be working on democratizing this technology. There are many ways to do that, and interpretability is one of the ways I can contribute.