
AI & Machine Learning News. 19 November 2018

 

Tech C.E.O.s Are in Love With Their Principal Doomsayer: The futurist philosopher Yuval Noah Harari thinks Silicon Valley is an engine of dystopian ruin. So why do the digital elite adore him so?

The futurist philosopher Yuval Noah Harari worries about a lot. He worries that Silicon Valley is undermining democracy and ushering in a dystopian hellscape in which voting is obsolete. He worries that by creating powerful influence machines to control billions of minds, the big tech companies are destroying the idea of a sovereign individual with free will. He worries that because the technological revolution’s work requires so few laborers, Silicon Valley is creating a tiny ruling class and a teeming, furious “useless class.”

2018-11-09 Read the full story.

CloudQuant Thoughts… Reed Hastings (Netflix CEO) wrote of Yuval Harari: “Yuval’s the anti-Silicon Valley persona — he doesn’t carry a phone and he spends a lot of time contemplating while off the grid. We see in him who we wish we were.” He added, “His thinking on A.I. and biotech in his new book pushes our understanding of the dramas to unfold.” I wanted to highlight this great article from a couple of weeks ago. Well worth a read.

 

Andrew Ng Launches ‘AI for Everyone,’ a new Coursera Program for Business Professionals

Andrew Ng, a computer scientist who led Google’s AI division, Google Brain, and formerly served as vice president and chief scientist at Baidu, is a veritable celebrity in the artificial intelligence (AI) industry. After leaving Baidu, he debuted an online curriculum of classes centered around machine learning — Deeplearning.ai — and soon after launched Landing.ai, a startup with the goal of “revitalizing manufacturing through AI.” (One of its first partners was Taiwanese company Foxconn, which produces the bulk of Apple’s iPhones.) Ng was the keynote speaker at the AI Frontiers Conference in November 2017, and this year unveiled the AI Fund, a $175 million incubator that backs small teams of experts looking to solve key problems using machine learning. Oh, and he’s also chairman of AI cognitive behavioral therapy startup Woebot; sits on the board of driverless car company Drive.ai; and wrote a guide to AI (“Machine Learning Yearning”) which he distributed for free. Yet somehow, he found time to put together a new online training course — “AI for Everyone” — that seeks to demystify AI for business executives.

Ng has always had a passion for education. At Stanford, where he previously served as director of the Stanford Artificial Intelligence Lab, he started Stanford Engineering Everywhere (SEE), a compendium of freely available online courses. It served as the foundation for Coursera, an online learning platform Ng cofounded with Daphne Koller in 2012. As of June 2018, Coursera had more than 33 million registered users enrolled in more than 2,400 courses.

Ng’s new course is available through Coursera (appropriately) for $49 per month, and launches in early 2019. And while it’s open to anyone, it’s principally geared toward business professionals who want to “better understand AI” and how it can impact their business — that is to say, executives interested in learning to select AI projects that’ll yield a return.
2018-11-16 15:30:06+00:00 Read the full story.

CloudQuant Thoughts… A new Ng course, aimed not at developers but at those who need to understand how this “radically new future” works, is a great development.

 

AXA Investment Manager Highlights Economic Upside of ESG (Environmental, Social and Governance)

Kathryn McDonald became head of sustainable investing at AXA Investment Managers’ Rosenberg Equities in May last year in order to ensure that environmental, social and governance considerations were integrated into all of the unit’s equities portfolios by the end of 2017. She had previously been director of investment strategy at AXA IM Rosenberg Equities since 2014, having joined the group in 1999. AXA IM is also integrating ESG analysis across its investment platform and in September the group made a raft of hires to strengthen its responsible investment capabilities. McDonald told Markets Media that fund managers can make an economic case to integrate ESG, as well as helping to slow global warming.

“Our argument is economic, and is as focused on upside as downside,” she said. “Some ESG metrics point us toward opportunities for jobs and GDP growth while also helping mitigate climate change.” AXA IM Rosenberg Equities has been managing portfolios that take ESG factors into account since the mid-1990s. At that time asset managers focused on exclusions, such as not investing in weapons manufacturers, but the market has become more sophisticated in monitoring ESG factors.
2018-11-16 19:05:37+00:00 Read the full story.

CloudQuant Thoughts… The use of sentiment data to improve auto-trading models is now common practice; I am now hearing ESG touted as the new cutting “Edge” for trading.

 

AI Needs To Be Easier, But How?

Companies today are scrambling to take advantage of the rapid evolution of artificial intelligence technologies, such as deep learning. They’re driven in part by fear of being left behind, and hopes of getting ahead of competitors. While AI is moving quickly, there are still substantial barriers to implementation, which provides incentive for the data science community to make AI simpler. There is still more talk than action on AI. According to a PwC report issued this week, 53% of firms say that they are planning their investment and use cases in AI. Fewer than 20% say they have at least one use case and a plan, and only 4% say they’ve successfully implemented the technology. Even worse, only 3% say they’ve implemented and are measuring ROI.

Those percentages jibe with the failure rate of big data projects, which Gartner analyst Nick Heudecker recently pegged at 85%. The most successful big data practitioners continue to be the Web giants like Google, Facebook, Twitter, Amazon, and Netflix, which developed many of the core technologies enabling the AI revolution, as well as the Fortune 500 firms that have millions to invest. Businesses face several major obstacles to succeeding with AI, including a mix of technical, architectural, and personnel-oriented challenges. The general patterns of utilizing machine learning (ML) and deep learning (DL) are fairly well established at this point, but businesses still struggle with the basics…
2018-11-14 00:00:00 Read the full story.

CloudQuant Thoughts… Even worse, only 3% say they’ve implemented and are measuring ROI. If this stat is accurate then there is a lot of money being spent without a lot of sausage being produced. Not good.

 

Hive taps a workforce of 700,000 people to label data and train AI models

Datasets are the lifeblood of artificial intelligence (AI) — they’re what make models tick, so to speak. But data without corresponding annotations is, depending on the type of algorithm at play (i.e., supervised versus unsupervised), more or less useless. That’s why sample-labeling startups like Scale have raised tens of millions of dollars and attracted clients like Uber and General Motors. And it’s why Kevin Guo and Dmitriy Karpman cofounded Hive, a startup that uses annotated data supplied by hundreds of thousands of volunteers to train domain-specific AI models.

Hive, which employs nearly 100 people, launched its flagship trio of products — Hive Data, Hive Predict, and Hive Enterprise — shortly before raising over $30 million in venture capital from PayPal founder Peter Thiel’s Founders Fund and others. “We built [Hive] because we felt that while there’s a lot of excitement around AI and deep learning, we didn’t see many practical applications being built,” Guo told VentureBeat in a phone interview. “There’s a lot of hype, but didn’t seem obvious what problems they’re really going to solve. Most of these things were demos that were somewhat working, but weren’t really enterprise-grade.”

Toward that end, Hive recruits the bulk of its human data labelers through Hive Work, a smartphone app and website that instructs them to complete tasks like classifying images and transcribing audio. In exchange, Hive doles out small rewards, paying out tens of thousands of dollars a week in total. (Guo says it can use “surge pricing” to ensure faster turnaround times when necessary, like when a Hive customer has a specific project.)
2018-11-16 00:00:00 Read the full story.

Facebook to set up ‘Supreme Court’ within a year to oversee appeals about content decisions

Facebook is planning to set up a Supreme Court-style panel to oversee user appeals of content removal decisions. The move comes just hours after a damning New York Times report accused the company of distracting attention from recent scandals by spreading “vile propaganda” about rivals.

In an attempt to limit the fallout from the report, chief executive Mark Zuckerberg posted a 4,500-word “blueprint” on the future of content moderation on Facebook. Until now, the site has relied on artificial intelligence to detect content like terrorist uploads and pornography, which can be taken down almost immediately. It also has a legion of human moderators.
2018-11-16 00:00:00 Read the full story.

CloudQuant Thoughts… It also has a legion of human moderators. We are hearing reports of PTSD among third-world “sifters” of our online media. At best, they are providing training data for the machines that will eventually take their newfound wealth away. One can only hope that these “new tech” workers are able to create something of their own with more longevity!

 

The Next Leap: How A.I. will change the 3D industry

CloudQuant Thoughts… This is a very entertaining presentation as a view of how AI may be affecting an industry that you have not considered. But I particularly liked the section I have linked to as it is something we have covered before, the Nvidia face maker. But Andrew’s slide shows the Input Images together with the images created by the GAN. Neat!

 

Machine Learning’s Big Role in Population-Level Genetic Study

A large-scale genetic project is currently underway in Nevada that’s using advanced analytics and machine learning to identify connections between people’s genes and their health. It’s the first project to study genetic data at a population level, and it could be a model for a national program. The Healthy Nevada project began in September 2016, when all residents of northern Nevada were invited to take a genetic test, at no charge to them. Much to the surprise of the project leaders, 10,000 Nevadans signed up in just 48 hours. The second phase of the project began earlier this year, and so far a total of 35,000 genetic tests have been conducted.

People who choose to participate are assured that the privacy of their genetic data is maintained at all times. They get the results of their DNA test and they can choose to share it or not.
2018-11-19 00:00:00 Read the full story.

CloudQuant Thoughts… I would not want to surrender my own DNA even with the “privacy” guaranteed! But this is a fantastic study and wonderful to think that it is going to be extended into three more states.

 


NEWS OF THE WEEK

Google Cloud CEO Diane Greene steps down

(Reuters) — Former Oracle Corp. product chief Thomas Kurian will replace Diane Greene as head of the cloud division at Alphabet Inc’s Google in the coming weeks, Greene announced in a blog post on Friday, after a tumultuous year for the business. Greene said that when she joined Google three years ago, she planned to leave after two years. Now, she said, she will move into investing and philanthropy in January. She will remain on Alphabet’s board.

Kurian, who spent 22 years at Oracle and had been a close confidant of its founder Larry Ellison, resigned in September after struggling to expand Oracle’s cloud business. Greene has served as chief executive of Google Cloud; Kurian will be senior vice president for Google Cloud, a company spokesman said. Google announced in February that the cloud division, which sells computing services, online data storage, and productivity software such as email and spreadsheet tools, was generating more than $1 billion in quarterly revenue. It faced a setback months later when thousands of Google employees revolted against Greene’s unit supplying the U.S. military with artificial intelligence tools to aid in analyzing drone imagery. Greene responded by announcing the deal would not be renewed.

The backlash over military work prompted an internal committee of top employees to issue companywide principles to govern the use of Google’s artificial intelligence systems, including a ban on using them to develop weaponry. The move essentially limited the cloud unit’s potential customer base.
2018-11-16 00:00:00 Read the full story.

Google Cloud Executive Who Sought Pentagon Contract Steps Down

Diane Greene, whose pursuit of Pentagon contracts for artificial intelligence technology sparked a worker uprising at Google, is stepping down as chief executive of the company’s cloud computing business.

Ms. Greene said she would stay on as chief executive until January. She will be replaced by Thomas Kurian, who oversaw product development at Oracle until his resignation in October. Ms. Greene will remain a board director at Google’s parent company, Alphabet.

The change in leadership caps a turbulent three years for Ms. Greene, who was brought on to expand Google’s cloud computing business. Google Cloud has struggled to make major inroads in persuading corporate customers to use its computing infrastructure over alternatives like Amazon’s A.W.S. and Microsoft’s Azure.
2018-11-16 00:00:00 Read the full story.

Google Cloud CEO Diane Greene stepping down, to be replaced by former Oracle cloud exec Thomas Kurian

Three eventful years after veteran tech executive and longtime Google board member Diane Greene was brought in to run the company’s cloud business, she announced plans to step down from that post Friday.

Greene will be replaced by Thomas Kurian, who ran Oracle’s cloud efforts until earlier this year when he left that company after reportedly clashing with mercurial co-founder and CTO Larry Ellison. She’ll stay with Google through the end of January, and Kurian will start at Google following the Thanksgiving break. “When I joined Google full-time to run Cloud in December 2015, I told my family and friends that it would be for two years. Now, after an unbelievably stimulating and productive three years, it’s time to turn to the passions I’ve long had around mentoring and education,” Greene wrote in a blog post.
2018-11-16 18:02:43-08:00 Read the full story.

 


Below the fold…

 

Amazon HQ2 Will Pay Dearly for Machine Learning and A.I. Talent

Amazon plans on hiring 25,000 workers for each of its new mega-offices (collectively dubbed “HQ2”) in New York City and Virginia. Not to be outdone, Google announced that it would add 14,000 workers to its already-massive office in New York City’s West Village. That’s great news for tech pros in those locations—provided they have the skills and experience that Amazon and Google actually want. Both companies are intent on capturing the market for artificial intelligence (A.I.) and machine learning applications; for example, Amazon recently announced that it assigned 10,000 employees to Alexa, its voice-activated digital assistant. Google also competes with Amazon Web Services (AWS) in the cloud-services arena, a burgeoning market that will increasingly leverage A.I. in coming years.

Over the past year, roughly one-fifth of Amazon and Google job postings have asked for machine-learning skills, according to an analysis by Burning Glass for the Wall Street Journal. Nationwide, only 3 percent of job postings ask for machine learning knowledge, which hints at just how hard Amazon and Google are leaning into these technologies. A.I. and machine learning are still relatively nascent disciplines, at least outside of university laboratories. Research and development costs are high, with many initiatives resulting in commercial products only after many years of work. In light of that, Amazon and Google are among the few tech firms that can even afford to hire and foster the necessary talent.

In New York City, both companies will pay a pretty penny for A.I. and machine learning experts, according to Dice’s database…
2018-11-15 00:00:00 Read the full story.

 

7 Lessons To Teach You All You Need To Know About Machine Learning

Before discussing the ways in which you can learn all you need to know about machine learning, we would like to discuss what the subject matter actually is. Machine learning is essentially teaching a computer how to make decisions with the help of relevant data. It is very important for the computer to be able to understand patterns without being fully programmed. The demand for machine learning is at an all-time high. It is a skill set which you want to possess, especially in this computer-savvy era. Whether you want to be a software engineer, a business analyst or, for that matter, a data scientist, machine learning will lay a strong foundation for all that and more. It is very important to note that data is king. From the smallest of companies to giant conglomerates, everyone wants to harness their data, so a course in machine learning will help you get a good job, and if not that, then an internship at a good firm. This is a one-of-a-kind course, which can be a lot of fun if you go about it right, so pull up your socks and we will point out 7 lessons which can teach you machine learning basics without spending a lot of money.
2018-11-19 09:30:25+00:00 Read the full story.

 

Google’s AI system can grade prostate cancer cells with 70% accuracy

Approximately one in nine men in the U.S. will develop prostate cancer in their lifetime, according to the National Cancer Institute, and more than 2.9 million patients diagnosed with it at some point are still alive today. And from a treatment perspective, it tends to be problematic — prostate cancer is frequently non-aggressive, making it difficult to determine which, if any, procedures might be necessary.

Google has made headway in diagnosing it, encouragingly, with the help of artificial intelligence (AI). In a paper (“Development and Validation of a Deep Learning Algorithm for Improving Gleason Scoring of Prostate Cancer“) and accompanying blog post, Google AI researchers describe a system that uses the Gleason score — a grading system that classifies cancer cells based on how closely they resemble normal prostate glands — to detect problematic masses in samples.

The goal, according to technical lead Martin Stumpe and Google AI Healthcare product manager Craig Mermel, was to develop AI that could perform Gleason grading objectively — and precisely. Human pathologists disagree on grades by as much as 53 percent, studies show.
2018-11-16 00:00:00 Read the full story.

 

Journalists at Wall Street Journal to be taught to identify “deepfakes”

The new technology, which MPs say “threatens democracy” and which has been described as a dangerous “propaganda weapon”, is a threat to journalism, the Wall Street Journal said. A new internal taskforce is made up of photo, video, research and news editors trained in identifying false content online. Deepfakes is a term used to describe artificial intelligence that mimics facial expressions. It can be used to build sophisticated propaganda videos by making anyone appear to say things they haven’t said, with uncanny realism.

The production of most deepfakes is based on a technique called “generative adversarial networks”, or GANs. This allows forgers to swap the faces of two people, for example a politician and an actor, to create a convincing impersonation.
2018-11-15 00:00:00 Read the full story.
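For readers curious what “adversarial” means mechanically, here is a minimal sketch of the GAN idea on one-dimensional numbers rather than faces. Everything about this toy (the linear generator, the logistic-regression discriminator, the learning rate) is an illustrative assumption, vastly simpler than the networks behind deepfakes, but the training loop is the same shape: a generator tries to mimic real samples while a discriminator tries to tell real from fake, and each is updated against the other.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# "Real" data: 1-D samples from a normal distribution centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: x = a*z + b, so it initially produces samples centered at 0.
a, b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)
    # Generator update (non-saturating loss): push d(fake) toward 1.
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a -= lr * np.mean(-(1 - df) * w * z)
    b -= lr * np.mean(-(1 - df) * w)

# After training, the generator's output has drifted toward the real mean of 4.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(round(fake_mean, 2))
```

Face-swapping systems replace the scalars above with deep convolutional networks and images, but the adversarial tug-of-war is the same.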

 

Microsoft’s conversational AI has come a long way since Tay

Microsoft debuted a bevy of new services and product updates on Wednesday during its Artificial Intelligence (AI) in Business event, including an AI model builder for Power BI, its no-code business analytics solution; a Unity plugin for AirSim, its open source aerial informatics and robotics toolkit; and a public preview of its PlayFab multiplayer server platform. But two announcements stood out from the rest and underline the Seattle company’s impressive progress in the field of conversational AI. The first was the acquisition of XOXCO, creator of the Botkit framework that enables enterprises to create conversational bots for chat apps like Cisco Spark, Slack, and Microsoft Teams. The second was an open source system for creating virtual assistants — built on the Microsoft Bot Framework — that can answer basic questions, access calendars, and highlight points of interest.

The wealth of bot-building tools now available to developers within Microsoft’s ecosystem — buoyed by the company’s acquisitions earlier this year of conversational AI startup Semantic Machines, AI development kit creator Bonsai, and no-code deep learning service Lobe — have no doubt contributed to the growth of Azure, its cloud computing platform. Azure topped Amazon Web Services in Q1 2018 with $6 billion in revenue, and to date 1.2 million developers have signed up to use Azure Cognitive Services, a collection of premade AI models. That momentum would’ve been inconceivable just two years ago.

March 2016, you might recall, marked the launch of Tay, an AI chatbot Microsoft researchers set loose on Twitter as part of an experiment in “conversational understanding.” The more users conversed with Tay, the smarter it would get, the idea went, and the more natural its interactions would become. It took less than 16 hours for Tay to begin spouting misogynistic and racist remarks. Many of these were prompted by a “repeat after me” function that enabled Twitter users to effectively put words in the bot’s mouth. But at least a few of the hateful responses were novel: Tay invoked Adolf Hitler in response to a question about Ricky Gervais; transphobically referred to Caitlyn Jenner as “[not] a real woman”; and equated feminism with “cancer” and “[a] cult.”
2018-11-16 00:00:00 Read the full story.

 

How to Find an Entry-Level Job in Data Science

When it comes to relative newcomers in the Data Science field, there aren’t many out there doing better than Alyssa Columbus. Although she just graduated from college earlier this year, she already has a full-time data scientist role at Pacific Life, a laundry list of conference and symposium speaking engagements, and has founded a local group for women who code in R. Oh, and she’s a NASA Datanaut.

In other words, if you want advice on how to break into the industry successfully, she’s a good person to ask. And while some of her advice will probably sound familiar, she also said some things you really might not expect.
2018-11-13 15:00:00+00:00 Read the full story.

 

Embracing alternative data: a three-stage approach for portfolio managers

In the quest for alpha, a growing number of portfolio managers have been exploring the esoteric world of alternative data. Some forms, such as sentiment data based on terabytes of news texts and social media, are well established and have matured to the point of securing devoted followings. Other forms, such as satellite imagery that sheds light on industrial, commercial, consumer and agricultural activity, have had provisional successes but are not yet mainstream. What they all have in common is the prospect of trading insights that are only available to those market participants intrepid enough to try them.

Still, while large numbers of potential users may be curious, many of them have so far held off from taking the leap because it is not immediately clear how much work is involved in trying to take advantage of these new sources of information. The fear is that alternative data represents an uncomfortable departure from the way they’ve always operated. That fear is misplaced…
2018-11-14 11:41:56 Read the full story.

 

Weekly Selection — Nov 16, 2018 – Towards Data Science

 

  • A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning
  • How to train Neural Network faster with optimizers?
  • Whose fault is it when AI makes mistakes?
  • A Bayesian Approach to Time Series Forecasting
  • The xtensor vision
  • Tic Tac Toe — Creating Unbeatable AI with Minimax Algorithm
  • Modeling: Teaching a Machine Learning Algorithm to Deliver Business Value
  • Predicting Professional Players’ Chess Moves with Deep Learning

2018-11-16 13:02:31.892000+00:00 Read the full story.

 

Explainable AI (XAI) won’t deliver. Here’s why. – Hacker Noon

Let’s talk about interpretability, transparency, explainability, and the trust headache. Explainable AI (XAI) is getting a lot of attention these days and if you’re like most people, you’re drawn to it because of the conversation around AI and trust. If so, bad news: it can’t deliver the protection you’re hoping for. Instead, it provides a good source of incomplete inspiration.

Before we get caught up in the trust hype, let’s examine the wisdom of a sentence I’ve been hearing a lot lately: “in order to trust AI, we need it to be able to explain how it made its decisions.” Complexity is the reason for all of it. Some tasks are so complicated you cannot automate them by giving explicit instructions. AI is all about automating the ineffable, but don’t expect the ineffable to be easy to wrap your head around. The point of AI is that by explaining with examples instead, you can dodge the headache of figuring out the instructions yourself. That becomes the AI algorithm’s job.
2018-11-16 22:58:04.282000+00:00 Read the full story.

 

Big Data, Big Overhead or Bad Math? – codeburst

In 2015 The Economist wrote an article about big data — Data, data everywhere. Now, if any magazine should be numerate, I would think it would be The Economist (a magazine, I should add, that I subscribe to and am normally a great admirer of). However, in this case it seems they failed to do the math and got taken in by the big data hype machine.

This article says that Walmart has collected 2.5 petabytes of data on customer transactions to be used for data mining. Gathering large amounts of retail data has been a hot topic for a few years. Everyone talks about the discovery that people who buy diapers often buy beer. (See Beer and Diapers: The Impossible Correlation). This has been cited in thousands of articles as an example of the magical knowledge that one can mine from big data.

2018-11-18 16:58:44.946000+00:00 Read the full story.

 

4 Reasons All Data Scientists Should Be Skilled in Psychology

Data science was recently named “the sexiest job of the 21st century” by Harvard Business Review. Today, enterprises ranging from start-ups to Fortune 500 companies are aggressively seeking the most talented data science specialists on the market. These experts help companies make sense of massive amounts of information, solve complex problems and improve operations. Data science experts who are trained in psychology are especially hot prospects. There is a rapidly growing need among business leaders to gain a better understanding of human behavior, prompting many to employ data science specialists trained in psychology; these dual-modality specialists stand out among information experts who lack training in understanding the human mind. This has encouraged a segment of data experts to return to school to add psychology training to their career toolkit. Each time a consumer uses a search engine or signs up for a newsletter or brand loyalty program, enterprises record the interaction. As a result, enterprises amass an enormous amount of data, making information management and utilization a common point of discussion among today’s business leaders. The following 4 sections highlight 4 reasons why data scientists should be skilled in psychology.

  1. Understanding Human Behavior Helps You Excel
  2. Technology Is Driven by the User and Their Mind
  3. Success Is Driven by Emotional Depth
  4. Psychological Understanding Builds Mindfulness

2018-11-17 11:30:22+00:00 Read the full story.

 

Can Predictive Analytics Help Improve Your Instagram Strategy?

Predictive analytics and big data can go a long way in benefiting your social media presence. Here’s how you can use it to improve your Instagram strategy.

I have written extensively on the benefits of using predictive analytics and other big data technologies in marketing. One of the topics I have barely touched on is the benefit of using predictive analytics on Instagram, and how it can help you improve your Instagram strategy. I felt it was important to discuss it in this piece, because Instagram is becoming a more important marketing vehicle than ever.
2018-11-17 19:19:47+00:00 Read the full story.

 

RIP wordclouds, long live CHATTERPLOTS – Towards Data Science

The data viz deficiencies of wordclouds have been well covered, as have alternative approaches, so I won’t belabor those points here.

Despite this, wordclouds remain as commonplace as they are confusing. In many ways they remain the de facto standard for text-data visualization, at least in less technical analysis (as evidenced, and likely exacerbated, by the abundance of easy-to-use online tools). This truly makes them the pie chart of natural language. (As such, both still have their place, despite being almost always inferior to other options.)
2018-11-19 03:24:27.706000+00:00 Read the full story.
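For anyone who wants to try the chatterplot idea, the core move is to replace the wordcloud’s arbitrary layout with meaningful axes: each word becomes a labeled point at (frequency, some second metric such as a sentiment score). Here is a minimal sketch that assembles those coordinates; the toy corpus and the tiny hand-made lexicon are illustrative assumptions, not a real dataset or a real sentiment lexicon.

```python
from collections import Counter

# Toy corpus standing in for real text data (illustrative only).
docs = [
    "great service fast delivery great price",
    "slow delivery poor service great selection",
    "poor packaging slow refund poor support",
]
# Tiny hand-made sentiment lexicon (assumption, not a published lexicon).
lexicon = {"great": 1.0, "poor": -1.0, "slow": -0.5, "fast": 0.5}

counts = Counter(w for d in docs for w in d.split())
# A chatterplot draws each word as its own marker at (frequency, score);
# here we just assemble those (word, x, y) coordinates.
points = [(w, n, lexicon.get(w, 0.0)) for w, n in counts.most_common()]
for word, freq, score in points[:5]:
    print(f"{word:10s} x={freq} y={score:+.1f}")
```

From `points`, any scatter tool that supports text labels (ggplot2’s geom_text, matplotlib’s annotate) can draw the actual chart, with word frequency on one axis and sentiment on the other instead of a decorative jumble.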

 

Mark Zuckerberg’s ‘war’ footing at Facebook driving out executives

Mr Zuckerberg has also reportedly experienced tensions with his key deputy, chief operating officer Sheryl Sandberg. The Wall Street Journal reported that she had confided in friends that she is worried about her job. Mr Zuckerberg has also told Ms Sandberg to allocate additional resources for taking down abusive content on its site, which has proved a challenge that its artificial intelligence systems are still adapting to.
2018-11-19 00:00:00 Read the full story.

 

‘Crypto hangover’ hammers Nvidia’s outlook, shares drop 17 percent

Chip designer Nvidia Corp (NVDA.O) on Thursday forecast disappointing sales for the holiday quarter, pinning the blame on unsold chips piling up with distributors and retailers after the evaporation of the cryptocurrency mining boom. The Santa Clara, California-based company also posted sales that missed expectations for its third quarter. Shares plunged nearly 17 percent in late trading to $168.32.

For decades, Nvidia has supplied gaming cards to boost computer graphics, but in recent years cryptocurrency miners adopted the company’s cards to turn bits into wealth. Chief Executive Jensen Huang said prices for Nvidia’s gaming cards had risen with the cryptocurrency frenzy, driving some buyers away. As the frenzy receded and card prices came down, Nvidia expected sales volumes to grow again as buyers who were priced out came back. But that process was slower than Nvidia expected, Huang said, adding that he expects inventories to be at normal levels by the end of the current quarter. “The crypto hangover lasted longer than we expected,” Huang said on a conference call. “We thought we had done a better job managing the cryptocurrency dynamics,” he later added.

As a result, Nvidia stopped shipping some of its mid-priced chips to retailers, where they are stacking up in warehouses and the backs of stores. The company said its provision for inventories expanded more than five-fold in the fiscal third quarter to $70 million, and that the same provision had more than tripled for the first nine months of its fiscal year to $124 million.
2018-11-16 13:10:33+00:00 Read the full story.

 

Nvidia: Bad News Already Priced into Shares, Says Susquehanna

In a note to clients, Susquehanna analyst Christopher Rolland lifted his rating on shares of the Santa Clara, Calif.-based tech company from neutral to positive, arguing that the recent sell-off in Nvidia stock, amid larger weakness in the chip sector and even broader turmoil for global markets, is overblown, as outlined by MarketWatch. Rolland attributed his upbeat forecast to the chip maker’s long-term opportunities in various high-growth markets.

Susquehanna is particularly bullish on Nvidia’s position in the Artificial Intelligence inferencing space — which the chipmaker describes as “taking smaller batches of real-world data and quickly coming back with the same correct answer” — a market that analysts expect will grow to reach $6.5 billion by 2025. “While inferencing is mainly addressed by CPUs today, recent GPU and FPGA platforms appear to be bona fide challengers,” wrote the Susquehanna analyst.
2018-11-14 11:10:00-07:00 Read the full story.

 

Will Machines Run The World? Steve Wozniak Gives His Thoughts – OpenMarkets

Not to worry, according to Steve Wozniak, co-founder of Apple. Machines guided by artificial intelligence (AI) do not wake in the morning and ask: “What should I do today?” That is what humans do. AI will allow for amazing advances in how machines can handle specific tasks, including very complex and inter-related tasks.

If Moore’s Law of technological advancement holds for AI, there will be job disruptions that arrive ever more rapidly. As AI-guided machines take over tasks, humans can focus on other endeavors. This has critical policy and economic implications; however, worrying about machines becoming human is not one of them.
2018-11-16 15:59:20+00:00 Read the full story.

 

After overtaking Apple in smartphones, Huawei is aiming for No. 1 by 2020

…Huawei also designs its own artificial intelligence chips which appear in its smartphones, much like Apple does. For Yu, AI will be a key technology that will take smartphones to the next level and help the company grow in the future.

“AI is coming. AI will be the engine for all the future services. AI will be elementary to working on many devices, it will connect all the apps, you can get all the services from this AI capability. The biggest changes in the next 10 years will be AI-enabled phones capability,” Yu told CNBC.
2018-11-16 00:00:00 Read the full story.

 

What Should You Know About Neural Networks?

Neural networks, which are more properly referred to as artificial neural networks, are computing systems that consist of highly interconnected simple processing elements. The structure of such systems resembles the way neurons connect to each other in the human brain. The artificial intelligence industry is growing fast, and neural networks make it possible to perform tasks that involve the process called deep learning. The human brain contains billions of neurons, and neural networks likewise have analogous basic units called perceptrons. Every perceptron performs simple signal processing and is connected to a large network of other units.

Neural networks can learn by analyzing numerous training examples. For example, machines have to analyze millions of handwritten digits to recognize them. Although such a task looks simple for humans, it’s important to understand that we recognize handwritten symbols only thanks to 140 million neurons in our visual cortex. Visual recognition becomes an extremely difficult task when it comes to programming machines. Such a program would have to take into account millions of exceptions and special cases. By analyzing training examples, a neural network can automatically adjust its rules for recognizing symbols. Moreover, the more examples a neural network has, the more accurate the recognition process becomes.
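The "simple signal processing" unit and the automatic rule-adjustment described above can be sketched in a few lines of Python. This is a minimal, illustrative perceptron (the learning rate, epoch count, and the AND-function training set are all assumptions made for the example, not anything from the article):

```python
# Minimal perceptron: one unit with weighted inputs, a bias, and a step
# activation, trained with the classic perceptron learning rule.
def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Nudge the weights toward the correct answer: the "rules"
            # are adjusted automatically from the training examples.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training set for the logical AND function.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_samples]
```

A real digit recognizer stacks millions of such units into layers, but the core loop — predict, compare to the label, adjust — is the same.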
2018-11-14 09:16:22+00:00 Read the full story.

 

How to start using AI to combat money laundering

According to a recent Europol report, combatting global money laundering continues to be a challenge. “The banks are spending $20bn a year to run the compliance regime … and we are seizing one percent of criminal assets every year in Europe”, Rob Wainwright, director of Europol told Politico recently. At an institutional level, the volume, variety, availability and location of data, along with changing regulations and fraud schemes, keeps many anti-money laundering (AML) compliance officers up at night. With this in mind, it is not surprising that many in the field are looking to new solutions that include artificial intelligence (AI) and, more specifically, machine learning (ML) to help propel the effectiveness and performance of their AML programs.

However, while there is much enthusiasm for the promise of what new technology can offer, there is also much hesitation, since many don’t understand how AI and ML work and what they can and will do for them. Before getting into what the technology can do for AML programs, it is worth noting that the one thing AI and ML will not do is completely replace the need for people. Human or natural intelligence has to be coupled with machine intelligence in an effective AML program. What machine intelligence can do, though, is look at both structured and unstructured data to detect patterns or clusters of activity that may be out of the ordinary and potentially fraudulent.
2018-11-19 00:00:00 Read the full story.

 

U.S. Treasury Data in Focus

U.S. Treasury traders are increasingly exploiting data to achieve best execution and optimize transaction cost analysis — that is, to do their jobs better. The government-debt market has some catching up to do in this regard, as equities and FX are further along the curve in having trade information publicly available. Given that market data is a mosaic — the numbers from one trade aren’t especially valuable, but a year’s worth of data is — trading and investing firms are trying to build for the future.

“People are starting to focus on a data strategy, which is really important,” said Josh Holden, Chief Information Officer for OpenDoor Securities, which operates a trading platform for off-the-run Treasuries and TIPS. “If you’re going to want to evaluate something two or three years from now, you need to capture the data now. If you don’t, you can’t go back and recreate it.” According to a Greenwich Associates report published in December 2017, of $487 billion in U.S. Treasuries that trade daily on average, 69% is executed electronically. While traders can find ample market data on the most recently issued benchmark 10-year note, the breadth of the Treasury market — spanning more than 350 CUSIP numbers — leaves a lot of information gaps.
2018-11-16 19:42:39+00:00 Read the full story.

 

Fail-Safe AI and Self-Driving Cars

Fail-safe. If you’ve ever done any kind of real-time systems development, especially involving systems that upon experiencing a failure or fault could harm someone, you are likely aware of the importance of designing and building the system to be fail-safe. For many AI developers that have cut their teeth developing AI systems in university research labs, the need to have fail-safe AI real-time systems has not been a particularly high priority. Often, the AI system being built is considered relatively experimental and intended to try out new ways of advancing AI techniques and Machine Learning (ML) algorithms. There is not much need or concern in those systems about ensuring a fail-safe AI capability.

For AI self-driving cars, the auto makers and tech firms are at times behind the eight ball in terms of devising AI that is fail-safe. With the rush towards getting AI self-driving cars onto the roadways, it has been assumed that the AI is going to work properly, and if it doesn’t work properly that a human back-up driver in the vehicle will take over for the AI. I’ve repeatedly cautioned and forewarned that using a human back-up driver as a form of “fail-safe” operation for the AI is just not sufficient. Human back-up drivers are apt to lose attention toward the driving task and not be fully engaged when needed. Also, human back-up drivers might not be aware that the AI is failing or faltering and therefore not realize they should be taking over the driving controls. Even if the human back-up driver somehow miraculously realizes or is alerted to take over the driving task, the human reaction time can be ineffective and the time for evasive action might already be past.
2018-11-13 15:45:06+00:00 Read the full story.

 

Public Attitudes Toward Computer Algorithms

Algorithms are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts on humans. They recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk. But despite the growing presence of algorithms in many aspects of daily life, a Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations.

This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do.
2018-11-16 00:00:00 Read the full story.

 

California hopes high-tech cameras can help stop fires before they grow

…Newsom, who is California’s current lieutenant governor and will take the full reins of power in Sacramento in early January, is well known to have a keen interest in technology. Sources say he’s spoken to executives of some of the utilities as well as some artificial intelligence experts about how to use high-tech more to combat wildfire threats…
2018-11-16 00:00:00 Read the full story.

 

Privacy concerns as Google absorbs DeepMind’s health division

Privacy advocates have raised concerns about patients’ data after Google said it would take control of its subsidiary DeepMind’s healthcare division. Google, which acquired London-based artificial intelligence lab DeepMind in 2014, said on Tuesday that the DeepMind Health brand, which uses NHS patient data, will cease to exist and the team behind its medical app Streams will join Google as part of Google Health. It comes just months after DeepMind promised never to share data with the technology giant and an ethics board raised concerns over its independence. A separate research team at DeepMind will continue to function independently of Google, but under the umbrella of its parent company Alphabet. A DeepMind spokesman said: “All patient data remains under our partners’ strict control, and all decisions about its use lie with them.”
2018-11-13 00:00:00 Read the full story.

 

MIT researchers show how to detect and address AI bias without loss in accuracy

Bias in AI leads to poor search results or user experience for a predictive model deployed in social media, but it can seriously and negatively affect human lives when AI is used for things like health care, autonomous vehicles, criminal justice, or the predictive policing tactics used by law enforcement. In the age of AI being deployed virtually everywhere, this could lead to ongoing systematic discrimination.

That’s why MIT Computer Science AI Lab (CSAIL) researchers have created a method to reduce bias in AI without reducing the accuracy of predictive results. “We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” said MIT professor David Sontag in a statement shared with VentureBeat. The paper was written by Sontag together with Ph.D. student Irene Chen and postdoctoral associate Fredrik D. Johansson. The key, Sontag said, is often to get more data from underrepresented groups. For example, the researchers found in one case that an AI model was twice as likely to label women as low-income and men as high-income. By increasing the representation of women in the dataset by a factor of 10, the number of inaccurate results was reduced by 40 percent.
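The "get more data from underrepresented groups" idea is sometimes approximated in practice by oversampling when collecting new data isn't possible. The sketch below (with invented numbers, and not the MIT paper's actual method) simply duplicates minority-group examples until groups are balanced, which is a crude stand-in for genuinely collecting more of them:

```python
import random

# Toy dataset: (group, label) pairs with one group heavily underrepresented.
data = [("man", "high") for _ in range(90)] + [("woman", "low") for _ in range(10)]

def oversample(rows, group_key=lambda r: r[0]):
    # Duplicate examples from smaller groups until every group
    # matches the size of the largest one.
    groups = {}
    for r in rows:
        groups.setdefault(group_key(r), []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

balanced = oversample(data)
```

Duplicated rows carry no new information, which is why the researchers emphasize collecting additional real data from the underrepresented group where possible.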
2018-11-16 00:00:00 Read the full story.

 

Cat or Dog? Introduction to Naive Bayes – Towards Data Science

If you’ve read any introductory books or articles on Machine Learning, you’ve probably stumbled upon Naive Bayes. It’s a popular and easy to understand classification algorithm.

Naive Bayes is a classification algorithm: a complicated name for a simple idea. Given an example, such as an image of a cat or a dog, it assigns a class to it, like putting a label on it that says cat or dog.

It is also part of a family of algorithms called supervised learning algorithms: the type of algorithm that learns by looking at examples that are properly classified. In machine learning parlance, each example is a set of features, i.e., attributes that describe that specific example. The set of examples the algorithm uses to learn is called the training set, and the new, never-before-seen examples it uses to check how good it is at classifying is called the test set; it is to these that the algorithm ends up assigning a class or label.

Naive Bayes is a probabilistic classifier, as well. The class or label the algorithm learns to predict is the result of creating the probability distribution of all the classes it is shown, and then deciding which one to assign to each example. Probabilistic classifiers look at the conditional probability distribution, i.e., the probability of assigning a class given a specific set of features.
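The conditional probabilities described above can be made concrete with a toy sketch. Everything here is invented for illustration (the feature names, the four training examples, and the simplified count-based probability estimate with add-one smoothing); it is a minimal version of the idea, not a production classifier:

```python
import math
from collections import Counter, defaultdict

# Toy training set: each example is a set of features plus a class label.
training = [
    ({"whiskers", "meows"}, "cat"),
    ({"whiskers", "purrs"}, "cat"),
    ({"barks", "fetches"}, "dog"),
    ({"barks", "whiskers"}, "dog"),
]

# Count how often each class and each (feature, class) pair appears.
class_counts = Counter(label for _, label in training)
feature_counts = defaultdict(Counter)
vocab = set()
for features, label in training:
    for f in features:
        feature_counts[label][f] += 1
        vocab.add(f)

def predict(features):
    best_label, best_score = None, float("-inf")
    for label, n in class_counts.items():
        # log P(class) plus the sum of log P(feature | class), using
        # add-one smoothing so unseen features don't zero out the score.
        score = math.log(n / len(training))
        for f in features:
            p = (feature_counts[label][f] + 1) / (n + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

The "naive" part is the assumption baked into the sum: each feature is treated as independent of the others given the class, which is rarely true but works surprisingly well in practice.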
2018-11-19 03:24:52.752000+00:00 Read the full story.

 

Machine Learning for Operations

Managing infrastructure is a complex problem with a massive amount of signals and many actions that can be taken in response; that’s the classic definition of a situation where machine learning can help. Adoption of MLOps or AIOps (as Gartner has christened this trend) has been slow, perhaps because making the most of them requires automation to apply the recommendations, and at least in part because IT is naturally conservative due to the need to ensure availability. Silos between IT teams, like separating service management and performance management, also make it hard to gather all the necessary data for effective machine learning. But the potential is significant and interest is growing.

It’s not just that AIOps can help with availability and performance monitoring, event correlation and analysis, IT service management, help desk and customer support, and infrastructure automation. It’s also part of the general ‘shift left’ DevOps trend where operations become an integrated part of app development and delivery. That means becoming increasingly responsive and proactive, but it also means improving the communications and coordination between teams, and connecting the data silos. With more applications to operationalize, monitor and support, and more of these using microservices, containers and cloud services that multiply the amount of infrastructure needing attention, machine learning is becoming a key tool in keeping up.
2018-11-15 10:52:58+00:00 Read the full story.

 

The Growing Significance Of DevOps For Data Science

Data science and machine learning are often associated with mathematics, statistics, algorithms and data wrangling. While these skills are core to the success of implementing machine learning in an organization, there is one function that is gaining importance – DevOps for Data Science. DevOps involves infrastructure provisioning, configuration management, continuous integration and deployment, testing and monitoring. DevOps teams have been closely working with the development teams to manage the lifecycle of applications effectively.

Data science brings additional responsibilities to DevOps. Data engineering, a niche domain that deals with complex pipelines that transform the data, demands close collaboration of data science teams with DevOps. Operators are expected to provision highly available clusters of Apache Hadoop, Apache Kafka, Apache Spark and Apache Airflow that tackle data extraction and transformation. Data engineers acquire data from a variety of sources before leveraging Big Data clusters and complex pipelines for transforming it. Data scientists explore transformed data to find insights and correlations. They use a different set of tools including Jupyter Notebooks, Pandas, Tableau and Power BI to visualize data. DevOps teams are expected to support data scientists by creating environments for data exploration and visualization.
2018-11-14 10:44:27+00:00 Read the full story.

 

How Alt Asset Managers Respond to Evolving Investor Demands

…Hedge fund managers are rapidly embracing artificial intelligence in the front office, with 29% reporting using AI, compared with just 10% in last year’s survey. The same is true for next-generation data, as 70% of hedge fund managers are using, or expect to use, alternative data within their investment process. According to EY, managers view the use of nontraditional data and/or AI as an opportunity to enhance their investment process and differentiate themselves in a competitive landscape.

“Not only do managers clearly see the benefit of AI and alternative data in helping them gain a competitive edge, but investors are actively seeking out managers that are exploring new innovations to deliver alpha,” Dave Racich, co-leader of EY’s global hedge fund services, said in the statement. “Not long ago, we were only talking about quantitative managers utilizing these techniques; however, we continue to see increased adoption and use cases across all strategies.”

The AI adoption rate among private equity firms is much lower, with three quarters of respondents saying they did not use AI and had no intention to do so, according to the survey. EY noted, however, that many larger private equity managers are investing in big data to help identify investment opportunities while providing analysis into pricing trends that guide buyout negotiations. In the back and middle office, both hedge funds and private equity continue to invest in technology solutions, the survey found. However, the sophistication of the technology investments is more pronounced with hedge fund managers who are ahead in using AI and robotics to automate a variety of processes.
2018-11-18 00:00:00 Read the full story.

 

What To Know About The Impact of Data Quality and Quantity In AI

Believe it or not, there is such a thing as “good data” and “bad data” — especially when it comes to AI. To be more specific, just having data available isn’t enough: there’s a distinction worth making between “useful” and “not-so-useful” data. Sometimes data must be discarded on sight because of how or where it got collected, signs of inaccuracy or forgery, and other red flags. Other times, data can get processed first, then passed on for use in artificial intelligence development.

A closer look at this process reveals a symbiotic relationship between our ability to gather data and process it — and our ability to build ever-smarter artificial intelligence. Data and machine learning both power AI, and AI, in turn, delivers more sophisticated machine learning tools. It’s a perfect system that has implications for businesses of every type and size, not to mention statisticians and scientists.
2018-11-17 09:30:12+00:00 Read the full story.

 

Exascale Supercomputers and AI Self-Driving Cars

…For those of us in the AI field, we generally tend to believe that aiming at neurons is a better shot at ultimately trying to have a computer that can do what the human brain can do. Sure, you can simulate a neuron with a FLOPS mode conventional processor, but do we really believe that simulating a neuron in that manner will get us to the same level as a human brain? Many of the Machine Learning (ML) and Artificial Neural Network (ANN) advocates would say no….
2018-11-16 15:45:39+00:00 Read the full story.

 

UN expert claims ethical artificial intelligence is a pressing need

A research fellow on Emerging Cyber technologies at United Nations University (UNU) stated that even though digital solutions like Artificial Intelligence (AI) are transforming lives, they also raise concerns, ranging from security to human rights abuses. The expert, Eleonore Pauwels, said that in the modern world AI is transforming human lives, from reshaping intimate and networked interactions to monitoring human bodies, moods and emotions, in ways both visible and invisible. As per UN news, Pauwels stated, “The all-encompassing capture and optimisation of our personal information — the quirks that help define who we are and trace the shape of our lives — will increasingly be used for various purposes without our direct knowledge or consent.”

She also stressed that how to protect independent human thought in an increasingly algorithm-driven world, goes beyond the philosophical and is now an urgent and pressing dilemma. Pauwels added that the evolution of AI is happening in parallels with technical advances in other fields, such as neuroscience, epidemiology and genomics and it means that “not only is your coffee maker sending information to cloud computers, but so are wearable sensors like Fitbits; intelligent implants inside and outside our bodies; brain-computer interfaces, and even portable DNA sequencers.”
2018-11-17 21:12:26+08:00 Read the full story.

 

Walmart’s New Sam’s Club Mimics Amazon Go

Walmart (NYSE:WMT) plans to open a new Sam’s Club store to be a test center for retail technology that will be very much like the Amazon Go convenience store from Amazon.com (NASDAQ:AMZN). Customers will be able to skip the checkout line by scanning their merchandise using scan-and-go technology. All you’ll need to shop in one is a Sam’s Club membership. This research center isn’t quite as advanced as Amazon Go, since the e-commerce giant’s store identifies the items you’ve selected and charges you, while the Walmart version requires you to scan the items to input them. But Walmart’s plan is also more affordably scalable, though the company apparently has no plans to open more than just this one, which will be in Dallas.

This new technologically advanced retail store will be called Sam’s Club Now and was first announced back in June, with Walmart calling it “the epicenter of innovation for Sam’s Club.” Using computer vision, augmented reality, machine learning, artificial intelligence, and robotics, Walmart is looking to test out concepts in this one store before rolling them out nationally after they’ve proved successful.
2018-11-12 22:32:46-05:00 Read the full story.

 

What is Amazon Elastic Block Store (EBS)? – Hacker Noon

Amazon EBS is like a hard drive in the cloud that provides persistent block storage volumes for use with Amazon EC2 instances. These volumes can be attached to your EC2 instances and allow you to create a file system on top of these volumes, run a database, server or use them in any other way you would use a block device.

A block storage volume works similarly to a hard drive. You can store any type of files on it or even install a whole Operating System on it. EBS volumes are placed in an availability zone, where they are automatically replicated to protect against data loss from the failure of a single component. But since they are replicated only within a single availability zone, you may lose data if the whole availability zone goes down, which is really rare.

There are five types of EBS volumes. You can use whatever works best for your use case.
2018-11-16 10:41:01.007000+00:00 Read the full story.

 

Filter, Aggregate and Join in Pandas, Tidyverse, Pyspark and SQL

One of the most popular questions asked by aspiring data scientists is which language they should learn for data science. The initial choices are usually between Python and R, and there are already plenty of conversations to help make a language choice. Selecting an appropriate language is the first step, but I doubt most people end up using only one language. In my personal journey, I learned R first, then SQL, then Python, and then Spark. Now in my daily work I use all of them, because I find unique advantages in each (in terms of speed, ease of use, visualization and others). Some people do stay in only one camp (some Python people never think about learning R), but I prefer to use what is available to create my best data solutions, and speaking multiple data languages helps my collaboration with different teams.

As the State of Machine Learning points out, we face the challenge of supporting multiple languages in data teams. One downside of working across languages is that I confuse one with another, as they may have very similar syntax. Respecting the fact that we enjoy a data science ecosystem with multiple coexisting languages, I would love to write down three common data transformation operations in these four languages side by side, in order to shed light on how they compare at the syntax level. Putting syntax side by side also helps me synergize them better in my data science toolbox. I hope it helps you as well.
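Of the four languages the story compares, SQL is the one Python's standard library can demonstrate directly. The sketch below shows the three operations — filter, aggregate, and join — against an invented two-table dataset using the built-in sqlite3 module (the table names and rows are made up for illustration):

```python
import sqlite3

# In-memory database with two small, invented tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE trades (trader TEXT, symbol TEXT, qty INTEGER)")
cur.execute("CREATE TABLE desks (trader TEXT, desk TEXT)")
cur.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                [("ana", "AAPL", 100), ("ana", "MSFT", 50), ("bo", "AAPL", 200)])
cur.executemany("INSERT INTO desks VALUES (?, ?)",
                [("ana", "tech"), ("bo", "macro")])

# Filter: keep only rows where qty exceeds 60.
big = cur.execute(
    "SELECT trader, qty FROM trades WHERE qty > 60 ORDER BY qty").fetchall()

# Aggregate: total quantity per trader.
totals = cur.execute(
    "SELECT trader, SUM(qty) FROM trades GROUP BY trader ORDER BY trader").fetchall()

# Join: attach each trader's desk to their trades.
joined = cur.execute(
    "SELECT t.trader, t.symbol, d.desk FROM trades t "
    "JOIN desks d ON t.trader = d.trader "
    "ORDER BY t.symbol, t.trader").fetchall()
```

The pandas equivalents would be boolean indexing, `groupby(...).sum()`, and `merge`; Tidyverse uses `filter`, `summarise`, and `*_join`; PySpark mirrors SQL almost verbatim.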
2018-11-19 03:25:35.456000+00:00 Read the full story.

 

Data Science And Robotics: The Next Big Area Of Study?

More than 53 percent of the world’s enterprises leverage big data technology. This is an enormous leap from only 17 percent in 2015. More companies are taking advantage of data science technologies to streamline their operations and improve their organizational structures. As a result, there’s a surge in exciting career opportunities for data scientists. Now is the time to enter the fascinating data science field. Today, there are many resources to prepare for a career in the discipline, such as the wealth of online learning opportunities. There’s an enormous demand for companies to find the right IT specialists and create a strong team. Business leaders need experts with analytical skills who are creative, innovative and proficient in working with digital information.

2018-11-14 18:48:49+00:00 Read the full story.

 

Cyber attacks, AI and swarms of bees: Telegraph readers reflect on the future of war

As the nation reflected on the centenary of the end of the First World War, the Telegraph’s special correspondent Harry de Quetteville examined what global warfare will look like over the course of the next 100 years. In the piece, he outlined the transformative technologies, including artificial intelligence, genomics and cyber warfare, that will impact future global conflicts.

The far-reaching capabilities of these new technologies had our readers taking to the comments section to offer their own predictions and analysis. The implications of fully autonomous weapons on civilian as well as military targets and the threat posed by increasingly sophisticated cyber attacks were among the key concerns of our readers. Read on to see a selection of the best reader comments on the future of war as well as additional commentary from Harry de Quetteville.
2018-11-15 00:00:00 Read the full story.

 

Software robot startup UiPath lands funding from Madrona and others, plans Seattle-area expansion

New York-based UiPath, a startup specializing in Robotic Process Automation, a new arena also known as “software robots,” has reeled in a fresh round of cash from IVP, Madrona Venture Group and Meritech Capital, extending its Series C financing round to $265 million, up from $225 million in September. Along with the additional investment, UiPath said it is looking to “significantly scale up” its research-and-development office in Bellevue, Wash., announcing the hiring of new senior engineering and business leaders from Microsoft and K2 in the Seattle region.

Previous investors in the round were CapitalG, Sequoia Capital and Accel. Some of the additional cash will be used to grow the business, while other portions of the financing round provided an opportunity for employees to cash out some of their shares. The new investors “will no doubt help us continue to define and lead the Automation First era for customers everywhere,” said UiPath CEO and co-founder Daniel Dines in a statement.

Typically, Seattle-based Madrona focuses on earlier-stage investment opportunities, but the venture firm cited its strong interest in Robotic Process Automation and artificial intelligence in making the investment. Param Kahlon, UiPath chief product officer, writes in a blog post this morning, “As we build out our Automation First strategy with a focus on applying AI to enhance employee productivity, the AI expertise in the greater Seattle region and at Madrona will be crucial to automating work processes. This region is clearly a leader in advanced AI expertise and we are excited to be here building out a product and R&D focused team.”
2018-11-14 14:15:09-08:00 Read the full story.

 

Intel Proposes Federal Privacy Bill To Protect Personal Data

The prospects for a federal privacy bill actually being signed into law in the United States in 2019 just took another big step forward with new draft legislation from Silicon Valley chipmaker Intel, which is now calling for public debate and commentary. If this privacy bill proposed by Intel eventually finds a sponsor in the U.S. Congress, that could pave the way for omnibus consumer data privacy legislation at the federal level that would impose significant penalties on any U.S. company that fails to provide reasonable safeguards for protecting personal information and data.
2018-11-16 00:00:00 Read the full story.

 

Machine learning vs front end developers: Future of UI design

A well-designed desktop or mobile website doesn’t just appear overnight. In fact, you need to go through a long process before you can publish your website. From concept sketches to a production-ready web app, the entire process can take many months. But what if there were an easier, faster, and more efficient way to code and test a website layout? Thanks to machine learning, specifically deep learning, that dream is now a reality. Experts predict that machine learning and AI technology will change the process of front-end design for web designers in just a few short years. Of course, the question then arises: what will happen to front-end developers once machine learning becomes the norm for web programming?
2018-11-13 10:09:49+00:00 Read the full story.

 

Experts cast doubt on whether China’s news anchor is really A.I.

Over the past week, the “world’s first” artificial intelligence (AI) news anchor has gone viral online, a robot version of a presenter at China’s state Xinhua News Agency. Lauded for “his” ability to broadcast 24 hours a day, the presenter said he would “work tirelessly to keep you informed.” The anchor was developed by Xinhua and Chinese search engine Sogou.com and launched at the World Internet Conference last week.

But is this actually a true example of AI? Will Knight, a senior editor for AI at MIT Technology Review, is somewhat skeptical. “The use of the term AI is a little bit tricky in this context, because the anchor itself is not intelligent, it doesn’t have any intelligence … But they are using some quite clever kind of machine learning which is a sub-field of AI to capture the likeness of a real anchor and the voice of that anchor,” Knight told CNBC by phone. When Knight first saw the anchor, he thought it was an impressive piece of mimicry. “The underlying technology for learning how to reproduce faces and voices is quite a sort of fundamental idea, and a quite powerful one potentially.”
2018-11-16 00:00:00 Read the full story.

 

Accenture’s Advice on How to Get Started with Automation

As artificial intelligence and machine learning offerings grow in popularity and move into production systems, enterprises are increasingly looking to utilize these technologies to automate critical business processes–from infrastructure maintenance to data management to customer services and beyond.

While there can be significant, tangible value in embracing automation technologies, realizing their full potential requires more than just flipping the “on” switch. Your automation journey should be well thought out ahead of time, attuned to the specific needs of your organization.

In this eWEEK Data Point article, Brian Sullivan, Managing Director of Oracle Technology Delivery at Accenture, provides six steps that he believes are important to developing and executing a successful automation strategy.
2018-11-16 00:00:00 Read the full story.

 

How Toyota Is Using Artificial Intelligence, Big Data and Robots

With initial funding of $100 million, Toyota AI Ventures invests in tech start-ups and entrepreneurs around the world that are committed to autonomous mobility, data and robotics. Toyota’s investments help accelerate getting critical new technologies to market. One of the organization’s investments is in May Mobility, a company that is developing self-driving shuttles for college campuses and other areas such as central business districts where low-speed applications are warranted. This is just one of the services that could blaze a trail to fully autonomous vehicles of the future.

Additionally, Silicon Valley-based Toyota AI Ventures contributed funding as well as mentorship, incubation facilities and validation to Nauto, a company that’s creating a shared data platform to prevent accidents caused by distracted driving; SLAMcore, a visual tracking and mapping algorithm developer for smart tech; Intuition Robotics, an organization that creates social companion technologies that are accessible and intuitive for seniors; Boxbot, a company that’s building self-driving delivery robots; and more. Like many disruptors, Toyota AI Ventures seeks out other innovators to tackle important challenges to propel the latest technologies.

Innovation has been omnipresent at Toyota from its earliest days, and it’s clear the company is continuing that tradition. While Toyota originally produced wooden hand looms, most people know the company for its automobile division. Its aim is to use artificial intelligence (AI) technology to make “cars an object of affection again” as early as 2020…
2018-11-13 15:30:23+00:00 Read the full story.

 

What To Know About How Big Data Is Affecting Social Media

A chief generator of big data is social media. With billions of active users on social platforms daily, artificial intelligence, automation bots, and analytics collectors glean copious amounts of information that can be used to improve marketing schemes and make the user experience better online. In turn, this changes the social media experience. With the influence of big data, businesses and consumers alike enjoy a new experience catered by the constant influx of data.
2018-11-17 19:27:11+00:00 Read the full story.

 

Microsoft acquires bot studio XOXCO to boost conversational AI offerings

Microsoft has acquired XOXCO, an Austin-based company that builds bots and conversational artificial intelligence tools, the latest in a series of deals to boost the tech giant’s AI bona fides. XOXCO is a five-year-old company that was responsible for Howdy, the first commercially available bot on Slack that helps users schedule meetings. Microsoft and XOXCO have teamed up in the past, and the company also built Botkit, which provides developer tools to users on GitHub.

In a blog post, Microsoft said it sees a “world where natural language becomes the new user interface,” helping people be more productive. The acquisition will help beef up the Microsoft Bot Framework service, which supports more than 360,000 developers today. The deal for XOXCO comes amid a series of announcements out of an AI event in San Francisco today. As part of that, Microsoft unveiled a new set of open-source tools to help businesses build out their own branded digital assistants. This development shows Microsoft is interested in getting customers to use its developer tools under the hood just as much as, if not more than, adopting its own digital assistant Cortana.
2018-11-14 16:36:18-08:00 Read the full story.

 

AI Experts Discuss Innovation, Limitations in the Workplace

According to two experts at the Techonomy conference in Half Moon Bay, Calif., on Nov. 12, both the threat of artificial intelligence eliminating certain jobs done by humans and the possible solutions aren’t well understood. Paul Daugherty, chief technology and innovation officer at Accenture, has a positive approach to AI innovation in the workplace, saying, “About 15 percent of jobs will be completely replaced, but the majority of jobs will be improved.”

Meanwhile, CEO of Sinovation Ventures Kai-Fu Lee believes AI has limitations and cannot be compassionate. “AI is a tool; it cannot be creative. … Anyone’s work that is purely routine—those jobs will be replaced by AI.” Lee also said managing the development and deployment of AI is key. “There are still issues to be solved—safety, privacy and job displacement,” he said. “But the idea of AI becoming some kind of super-intelligence taking over the world is overblown; it won’t happen.”
2018-11-16 00:00:00 Read the full story.

 

Where to Study Robotics: a List of Free Courses

Robotics is an engineering discipline focused on the design and manufacture of robots. This area of engineering is related to computer science, mechatronics, artificial intelligence, and bioengineering. Science-fiction writer Isaac Asimov is widely considered the first person to use the term “robotics.” He was also the author of its three main principles, the Three Laws of Robotics:

  1. Robots shouldn’t harm humans.
  2. Robots should follow the given instructions without violating the first rule.
  3. Robots should protect themselves without violating the other two rules.

Robots have many applications, performing various tasks that are usually done by humans. They build cars on assembly lines, disarm explosive devices, and explore the bottom of the ocean and outer space. You can also buy a robot cleaner that will move around your home and clean the floor. The robotics industry evolves all the time, and engineers constantly discover new ways robots can help us solve various problems.

Today, many people realize how important it is to learn robotics. Knowledge of robotics not only can give you an interesting profession but also can help you develop and improve various skills. For example, when learning robotics, you can also learn creative thinking, concentration, programming, teamwork, and get more prepared for the future of technology in general.
2018-11-14 09:30:17+00:00 Read the full story.

 

Blackberry is buying an AI and cybersecurity company for $1.4 billion

The once-king of the mobile landscape is acquiring Cylance for $1.4 billion in cash, Reuters reports. Cylance develops artificial intelligence products that help prevent cyberattacks on companies. However, BlackBerry will mainly use Cylance to boost the capabilities of its QNX unit, which makes software for next-generation self-driving cars.
2018-11-16 06:43:02 Read the full story.

BlackBerry Jumps Into Security With $1.4B Cylance Acquisition

A week after the first rumors of the acquisition began to surface, BlackBerry confirmed on Nov. 16 that it is acquiring cyber-security vendor Cylance for $1.4 billion in cash. The deal is expected to close before BlackBerry’s fiscal year end in February 2019. Cylance is best known for its suite of endpoint detection and response (EDR) security capabilities, including the company’s Protect, Optics, ThreatZero and Smart Antivirus products.

“Cylance’s leadership in artificial intelligence and cybersecurity will immediately complement our entire portfolio, UEM and QNX in particular,” John Chen, executive chairman and CEO of BlackBerry, wrote in a statement. “We believe adding Cylance’s capabilities to our trusted advantages in privacy, secure mobility, and embedded systems will make BlackBerry Spark indispensable to realizing the Enterprise of Things.”
2018-11-16 00:00:00 Read the full story.

 

Smart Data Is Changing Multi-Board System Design

Most electronic items on the market today have multiple, interconnected printed circuit boards, and the process of integrating them is not an easy one. To ensure a product functions properly, it is important to bring these boards together inside the enclosure and ensure that they are correctly connected to each other. This can be the most challenging part of any product development process.

Just a few of the big questions you need to ask yourself include whether the nets have been assigned correctly on each connector, whether the connectors are oriented correctly, whether the plugs fit together, and whether all of the connected boards fit in the enclosure. As you can see, there are many challenges that a designer must overcome when designing systems that feature multiple boards. Mistakes made late in the product development process can have huge consequences and be extremely costly: the product will have to be redesigned, delaying its arrival on the market to start selling and generating profit.

2018-11-13 18:41:31+00:00 Read the full story.

 

Marketers claim creative quality is being tarnished by digital advertising growth

Marketers are struggling to get the balance right between great creative and data-driven digital advertising. According to a new report from adtech company Sizmek, 67 per cent of marketers believe that the “digital growth in advertising has come at the expense of the quality of creative”. The study, Marketers Survey Results 2018: An Insider’s Look at Creative Quality, Personalisation, and DCO is based on a survey of more than 500 senior brand-marketers across Europe and the US.

The study found 91 per cent of marketers are prioritising the need to make digital ads more engaging over the next year to meet their brand goals. When considering the impact of Artificial Intelligence (AI), 84 per cent said the technology is useless without the right creative input, believing data alone isn’t enough to support marketers. Ninety-one per cent also believe that creative input is as important as the use of data in digital campaigns.

Further emphasising the importance of data, the clear majority see the GDPR as a necessary introduction in relation to the quality of creative. From the report, 79 per cent said the quality of creative will become even more important due to the regulation, despite the intense industry focus on data and privacy concerns.

2018-11-15 13:03:44+11:00 Read the full story.

RANKED: The 50 most underrated colleges in America

Whether a student is more inclined to learn about artificial intelligence or take up photography, several factors play a role in choosing the right college. Business Insider has compiled a ranking of the top 50 underrated colleges nationwide by considering two factors: reputation and future earnings. We figured that schools with mediocre or obscure reputations but whose students made high salaries would be underrated.

To determine our ranking, we combined the mutually exclusive sets of Best Universities and Best Liberal Arts Colleges from U.S. News & World Report’s annual college ranking. From the US Department of Education’s College Scorecard, we pulled data on median earnings for students from each school who were working and not enrolled ten years after starting at the college. You can read the full methodology here.
2018-11-17 00:00:00 Read the full story.

 

Data Literacy and the Colin Powell Rule: From Frontline Field Support to Back Office Operations

Colin Powell famously said that leaders should make critical decisions based on a defined zone of uncertainty. Acting with less than 40% of the data needed is reckless; waiting for more than 70% of the data may be fatal. Balancing time and certainty is a challenge for all organizations. Uncertainty is systemic in all business, and is not eliminated by policy, process or controls. The driver of uncertainty is unexpected change—in markets, competitors, society, technology, workforce, expectations, values—you name it. Businesses face change on a daily basis, and they must respond quickly.

Businesses cope with change, and the resulting uncertainty, through analytics. They study the situation, do root cause analysis, assess alternatives, and implement a response. The faster they can assemble enough data to do an analysis, the faster they can make a decision and respond. Per the Powell rule, they are always balancing speed and certainty. It’s not just the right decision, it’s the right decision in time.
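The 40-70 heuristic above is simple enough to express directly. The sketch below is illustrative: the thresholds come from the rule as described, while the function name and returned messages are our own.

```python
# A sketch of Colin Powell's 40-70 rule as described above. The 40% and
# 70% thresholds come from the rule itself; the function name and the
# returned messages are illustrative, not part of any formal definition.
def powell_zone(information_fraction):
    """Classify a decision point by the fraction of needed data in hand."""
    if not 0.0 <= information_fraction <= 1.0:
        raise ValueError("information_fraction must be between 0 and 1")
    if information_fraction < 0.40:
        return "too early: acting now would be reckless"
    if information_fraction <= 0.70:
        return "decision zone: act"
    return "too late: waiting for more data may be fatal"
```

With 55 per cent of the needed data in hand, for example, `powell_zone(0.55)` lands in the decision zone: balancing speed and certainty means acting now.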

2018-11-16 00:35:38-08:00 Read the full story.

 

Navigating Changing Global Regulation with Artificial Intelligence – IBM – Video (5m 30)

Marc Andrews, Vice President of Watson Financial Services, IBM, speaks at Sibos 2018 in Sydney about how the banking industry is coping with the increasing burden of regulatory demand, how beneficial AI is to managing regulation and how AI led solutions help banks to manage risk and financial crime.

2018-11-14 15:34:00 Read the full story.

 

Machine Learning Moves Into Fab And Mask Shop

Semiconductor Engineering sat down to discuss artificial intelligence (AI), machine learning, and chip and photomask manufacturing technologies with Aki Fujimura, chief executive of D2S; Jerry Chen, business and ecosystem development manager at Nvidia; Noriaki Nakayamada, senior technologist at NuFlare; and Mikael Wahlsten, director and product area manager at Mycronic. What follows are excerpts of that conversation.
2018-11-16 15:00:08+00:00 Read the full story.

 

The Evolution of Remote Sensing: Delivering on the Promise of IoT

Remote sensing technology has come a long way since Gaspard Felix Tournachon’s pioneering aerial photographs in 1858. While the technology has advanced, there is one aspect of remote sensing that has remained relatively unchanged: the frequency of data acquisition. Now, with the advent of new acquisition platforms, smaller and more efficient sensors, as well as forces like cloud computing, remote sensing is again on the cusp of significant innovation. We are moving beyond the traditional interpretation of remote sensing as an aerial mapping, GIS and earth observation discipline to something that resembles another recent development, the industrial Internet of Things (IoT). Interconnected devices on converged platforms streaming data on a continuous basis will paint a new vision of the world in which we live.

Since those early days, more than 160 years ago, when Tournachon leaned over the side of a hot air balloon to photograph the landscape, the practice of remote sensing for mapping purposes has evolved tremendously – from human interpretation of aerial images to measurements using digital photogrammetry; from the evolution of visible imagery to spectral imagery; and from photogrammetry to lidar (light detection and ranging), which relies on pulsed laser light to make 3-D digital representations of a target area.
2018-11-13 00:00:00 Read the full story.

 

The Essentials for Covering Silicon Valley: Burner Phones and Doorbells

Reporting on secretive technology companies sometimes means finding people who don’t want to be found. Jack Nicas, who covers Apple, relies on some old-school methods.

How do New York Times journalists use technology in their jobs and in their personal lives? Jack Nicas, a technology reporter for The Times in San Francisco, discussed the tech he’s using.
2018-11-14 00:00:00 Read the full story.

 

3 Ways AI and Robotic Process Automation Will Improve Life Settlement Transactions

The U.S. life insurance industry is beginning to understand the vast potential benefits of robotic process automation (RPA) and artificial intelligence (AI). These two related breakthrough technological innovations leverage the power of machine learning to increase productivity and reduce the risks associated with human error.

Of course, many professionals in our industry find the names of these technologies unappealing, prompting skepticism from the outset. These reactions are often rooted in fear of the unknown, apprehension that is unnecessary once we understand the essence of the technologies.
2018-11-17 00:00:00 Read the full story.

 

Microsoft is helping Abbey Road RED explore the future of music recording

Abbey Road Studios, the world-famous studio that was home to The Beatles and Pink Floyd, is trying to shape the future of music creation at its first hackathon using Microsoft technology.

The London studio’s audio technology incubator, Abbey Road RED, invited around 100 developers, technologists, designers and music producers to find new ways of capturing sound and revolutionising the engineering process.

Microsoft provided artificial intelligence technology and experts for the event, which will gather feedback on how the music industry could use its cognitive services.

“I’m incredibly excited to share some of the latest Microsoft AI tools with participants in the Abbey Road RED Hackathon,” said Noelle LaCharite, Leading Applied AI DevEx at Microsoft. “Our suite of AI technology, including object detection, sentiment analysis and natural language understanding, has awesome potential for musicians, engineers, audio programmers and designers.”
2018-11-15 00:00:00 Read the full story.

 

Q&A: IBM Flash Storage Systems CTO Andy Walls

Data storage has an odd position within the IT continuum. Although widely recognized for the critical roles they play in computing infrastructures of every kind and size, storage solutions tend to receive less public attention than microprocessor and memory technologies. That is a serious oversight, especially when one considers how important storage is in crucial new use cases and workloads, including artificial intelligence (AI), machine learning, big data, advanced analytics and the internet of things (IoT).

In the following interview, Andy Walls, the CTO and chief architect of IBM’s Flash Systems storage organization, provides insights into how these and other developments fit into the current state of enterprise storage and IBM’s efforts. Walls is particularly well-suited to that task since his remarkable 3+ decade-long IBM career spans key evolutions in modern enterprise storage, many of which were sparked or extended by his own achievements.
2018-11-18 00:00:00 Read the full story.

 

Surveillance marketing: Too much personalization can hurt your brand

Personalization is often touted as a panacea in the world of marketing. An omnipotent force with the ability to recognize our desires and needs and make the world of advertising and experiences more relevant. All in the noble cause of selling more stuff. A study published in the Journal of Applied Psychology found that personalized ads attract more attention and last longer in the memory. Salience and mental availability are fundamental to advertising success. So all good?

Well, there is also research suggesting that as consumers learn more about how advertising personalization works, they like it less. A recent YouGov study found that 45 percent of UK consumers are against their data being used for personalization of information, services, and advertising, and 54 percent find personalized advertising creepy.
2018-11-18 00:00:00 Read the full story.

 

AEye raises $40 million for sensor that merges camera and lidar data

With some analysts predicting as many as 10 million self-driving cars will hit the road by 2030, it’s no wonder the lidar market is projected to be worth $1.8 billion in just five years. Lidar sensors, which measure the distance to target objects by illuminating them with laser light and measuring the reflected pulses, form the foundation of a number of autonomous car systems, including those from Waymo and Uber.
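The time-of-flight principle behind lidar can be sketched in a few lines. This is a generic illustration of the physics described above, not AEye’s sensor design:

```python
# A generic time-of-flight sketch of the lidar principle described above:
# distance is (speed of light x round-trip time) / 2, because the pulse
# travels to the target and back. Not AEye's sensor design.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def lidar_distance(round_trip_seconds):
    """Distance to a target, from the reflected pulse's travel time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A pulse returning after roughly one microsecond, for instance, indicates a target about 150 metres away, which is why scan rate and timing precision are the key performance levers for these sensors.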

But not all lidars are created equal — or so argues AEye, a five-year-old San Francisco startup with an innovative sensor technology it claims can exceed the range and scan rate of traditional lidar. Today the startup announced a $40 million Series B funding round led by Taiwania Capital, with participation from heavy hitters like Intel Capital, Airbus Ventures, and Tyche Partners, in addition to undisclosed “multiple global automotive OEMs, Tier 1s, and Tier 2s” to be announced at the Consumer Electronics Show in January. This round brings AEye’s total raised to just over $61 million.
2018-11-19 00:00:00 Read the full story.

 

Dell Looks at the Future, and Frankly, It Can Be Scary

CHICAGO—This week I’m in lovely (and frickin’ cold) Chicago at Dell’s annual analyst event, and Michael Dell is on stage talking about things like “bionic vision.” Recalling the old TV show “The Six Million Dollar Man,” this technology is coming out of England for folks who have catastrophic vision loss. This is kind of a primer on what is going on with regard to applied data analytics.

From Dell’s viewpoint—and he is hardly an outlier here—we are facing an ever-growing tsunami of data, and the related problems and opportunities are massive. He argues that Dell, at its current massive scale, is the only company that can stretch from the edge, embrace the cloud and encompass the core (which bridges on-premises and the multicloud world) to prepare firms for that future.
2018-11-14 00:00:00 Read the full story.

 

How automotive supplier Valeo wants to accelerate autonomous vehicle development

The Valeo Drive4U demo was yet another milestone in those efforts to recast its culture and image to embrace innovation. While only for demonstration purposes, the car was packed with Valeo parts such as ultrasonic sensors, cameras, laser scanners, and radars. The car also incorporated Valeo’s artificial intelligence systems for processing the data.

As a result, the car could navigate the streets of Paris while detecting and reacting to intersections, traffic lights, cyclists, and pedestrians. Critically, all these parts exist today, and the car could operate at Level 4 autonomy (a human driver is present, but there is almost never a need to intervene). “The point was to show to our customers that we were able to operate at Level 4,” Devauchelle said. “Paris downtown is maybe one of the challenging driving environments.”
2018-11-19 00:00:00 Read the full story.

 

VR is leading us into the next generation of sports media

…For the athletes themselves, they’ll no longer be at the mercy of fate, hoping to make the dynamic play that catches a scout’s eye when the scout happens to be in the stadium watching. The potent combination of streaming video technology and machine learning will allow the cream to more easily rise to the top, as innovation will allow teams to cull the most relevant game action and training footage in order to discover top prospects and develop them into the next legends of the game…
2018-11-16 00:00:00 Read the full story.

 

Refinitiv Takes First Step to Data Marketplace with Cloud-Based QA Direct • Integrity Research

Refinitiv, formerly the Financial and Risk business of Thomson Reuters, recently announced it was launching its QA Direct data platform in the cloud based on Microsoft Azure. The move of QA Direct to the cloud enables quantitative investors to access Refinitiv’s extensive database, plus third-party data. QA Direct in the Cloud is a new product offering which builds upon Refinitiv’s existing QA Direct hardware-based data platform, which provides a scalable platform to manage, maintain and integrate quantitative analysis and investment data.

The new QA Direct in the Cloud platform will eventually deliver 9 terabytes of entitled Refinitiv data, the largest repository of cloud-based financial data, a unified symbology, quantitative alpha models and other third-party content sets allowing customers to explore various investment hypotheses and implement proprietary quantitative strategies.
2018-11-19 02:00:51+00:00 Read the full story.

 

BlueData and H2O.ai Collaborate to Accelerate AI and Machine Learning Deployments

BlueData, provider of container-based software, and H2O.ai, an open source leader in AI, will partner to advance AI and machine learning (ML) deployments. The collaboration includes integration of H2O.ai’s full suite of products – including open source H2O, H2O Sparkling Water for machine learning with Spark, and the automated H2O Driverless AI – with the container-based BlueData EPIC software platform.

BlueData and H2O.ai share many joint customers across multiple industries – including industry-leading organizations like Barclays, Citi, GM Financial, Optum, Macy’s, Seattle Children’s, and SCL Health. The BlueData-H2O.ai partnership will enable these and other enterprise customers to realize the full potential of AI / ML, helping them to make mission-critical business decisions and deliver data-driven innovation.

“Our mission is to democratize AI by putting automatic machine learning in the hands of data scientists and engineers,” said Sri Ambati, CEO and founder of H2O.ai.
2018-11-14 00:00:00 Read the full story.

 

The future of data is streaming: What to expect in next 10 years?

…Neural networks, deep learning and predictive decisioning algorithms all rely on large-scale stream processing, identifying trends and outliers among thousands or millions of similar data events.

While here too there is plenty of hype to go around, few experts would disagree that these technologies are going to play a major part in both industry and science in the coming decade. As AI and ML enter the mainstream, we are likely to see increasing demand for tools and skilled personnel for capturing, processing and structuring streaming data (hence the oft-cited data scientist shortage).
2018-11-15 15:38:29+00:00 Read the full story.

 

The edge is forcing us to rethink data centre management

…With a cloud-based solution, you can collect and analyse massive amounts of data, providing greater opportunities to perform benchmarking and improve performance. Leveraging big data analytics and machine learning, organisations will be able to spot trends, predict failures, make data-driven decisions and uncover insights to optimise operations.

Cloud-based systems also mean that your edge centres can be easily deployed, standardised and scaled up in a cost-effective way. It’s possible to remotely monitor and manage all of your sites from a single location, either from a mobile device or laptop, with real-time data visibility and alerts.
2018-11-16 00:00:00 Read the full story.

 

Selected CIO Strategies for AI Business Success

Your AI program should have short-term and long-term goals and milestones. Here’s how CIOs can reset executive expectations and deliver results with their artificial intelligence programs.
IT executives often find themselves in a balancing act to achieve two things for business leaders — quick IT project success, perhaps a proof-of-concept project, and then also preparing the larger organization for the long-term potential and value of an emerging technology.

Artificial intelligence and all the technologies that go with it fall into this category. Your IT organization may be at the start of the journey with some initial projects. Or you may be working to integrate the value of AI more deeply into your organization’s technology and process infrastructure. Yet you also know that the biggest value will come from long-term investments in this emerging field.
2018-11-13 15:10:00+00:00 Read the full story.

 

The Growing Importance of Big Data in the Pharmaceutical Industry

My colleagues have raised many valid points about the evolving role of big data in the healthcare industry. Most of the focus is on the role of big data in healthcare delivery at hospitals and clinics. However, there is a very important reason that big data is needed in the pharmaceutical industry as well.

A 2013 report from James Cattell at McKinsey states that big data can add $100 billion in value to the pharmaceutical industry by improving research and development. However, there are other benefits of big data that have received less attention, including the use of predictive analytics to identify opportunities in the market. Predictive analytics algorithms may, for example, help pharmaceutical companies identify demographic factors that indicate how the prevalence of certain diseases will rise over a given time frame.
2018-11-14 18:17:13+00:00 Read the full story.

 

Broadridge and Tableau Deliver Enhanced Analytics

Broadridge Financial Solutions, Inc. (NYSE:BR), a global Fintech leader and recent addition to the S&P 500® Index, has partnered with Tableau Software, delivering enhanced analytics and visualization capabilities to Broadridge’s investment management clients. Clients will have seamless integration between the Broadridge Investment Management Data Warehouse and Tableau to provide self-service access to individualized analytics, customizable digital reports and interactive dashboards.

Tableau’s commitment to developing the broadest and deepest analytics platform gives Broadridge clients access to emerging technologies with capabilities like smart data recommendations powered by machine learning, and natural language processing, which enables a more natural, conversational way to ask questions and gain insights from data. By integrating Broadridge’s industry-leading solutions with Tableau, Broadridge clients can easily explore trading and portfolio data, while intuitive, visual analytics enables them to seamlessly showcase their findings and empower more people with data-driven results.
2018-11-16 19:03:29+00:00 Read the full story.

 

The Future of Wealthtech: Artificial Intelligence, Machine Learning, and Big Data Analytics

The wealth management industry is swiftly changing with the emergence of new business models and modern technology. Artificial intelligence (AI), machine learning (ML), and big data are spearheading this dynamic evolution and fostering the growth of wealthtech.

More and more players are employing these tools in order to simplify business processes and help clients reach sound financial decisions through personalized advice. Indeed, this practice brings incredible benefits: it saves humans a wealth of time and, ultimately, improves investment outcomes.
2018-11-17 17:00:25+00:00 Read the full story.

 

Datawatch Angoss Simplifies Data Science and Analytic Tasks on the Apache Spark Platform

“Datawatch Corporation today announced the general availability of Datawatch Angoss KnowledgeSTUDIO for Apache Spark, enabling organizations to act more confidently with their data and rely on consistent, trustful results in making better business decisions. In combination with its market-leading data visualization approach for building, exploring and segmenting data using patented Decision Tree technology, Datawatch Angoss enables data science teams to create predictive analytic models using Apache Spark by means of a drag-and-drop / point-and-click interface.”
2018-11-15 00:15:43-08:00 Read the full story.

 

Kyvos Insights Allows Businesses to Seamlessly Scale Business Intelligence in the Cloud

Kyvos Insights, a big data analytics company, today announced the availability of Kyvos Version 5, providing businesses with a cloud-native way to elastically scale and draw intelligence from exponentially growing data workloads. The platform update delivers enterprise-class OLAP with instant response times and unlimited scalability, allowing enterprises to glean critical insights from today’s largest data sets while also enabling real-time queries to be run on data as soon as it arrives.

The new platform updates have been designed to better allow business users to gain real-time intelligence from ever-increasing data sets. Developed specifically for the cloud, the updates expand upon the commitment to scalability inherent in the Kyvos platform by introducing elastic scalability for querying, allowing businesses to scale up and down more seamlessly as data loads change. This elasticity also allows for improved segmentation of data workloads within organizations, meaning different business units can more readily run queries without slowing down other teams.
2018-11-19 00:15:38-08:00 Read the full story.

 

SnapLogic Introduces Self-Service Solution for Machine Learning

SnapLogic, provider of an integration platform, has announced SnapLogic Data Science, a new self-service solution to accelerate the development and deployment of machine learning with minimal coding.

SnapLogic Data Science allows data engineers, data scientists, and IT/DevOps teams to manage and control the entire machine learning lifecycle – including data acquisition, data exploration and preparation, model training and validation, and model deployment – all from within the SnapLogic integration platform. The solution helps break down traditional barriers that can undermine machine learning initiatives by providing a common platform for machine learning visibility and collaboration across teams.
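The lifecycle stages listed above can be sketched end to end in plain Python. This is an illustrative toy using only the standard library, not SnapLogic’s API; the dataset and the `predict` helper are invented for the example.

```python
# Illustrative sketch of the machine-learning lifecycle stages named
# above: acquisition, preparation, training, validation, deployment.
# Platforms like SnapLogic Data Science wrap each stage in visual
# pipeline steps; this toy does them inline with the stdlib only.

import statistics

# 1. Data acquisition: in practice this would read from a database or API.
raw = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1), (None, 5.0), (5.0, 9.8)]

# 2. Exploration and preparation: drop rows with missing features.
clean = [(x, y) for x, y in raw if x is not None]

# 3. Training: fit y = a*x + b by ordinary least squares.
xs = [x for x, _ in clean]
ys = [y for _, y in clean]
mx, my = statistics.mean(xs), statistics.mean(ys)
a = sum((x - mx) * (y - my) for x, y in clean) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# 4. Validation: mean absolute error (a real pipeline would score a
# held-out split, not the training data).
mae = statistics.mean(abs((a * x + b) - y) for x, y in clean)

# 5. Deployment: expose the fitted model as a callable.
def predict(x):
    return a * x + b
```

Each numbered step maps onto one of the lifecycle phases the announcement describes; the value of a managed platform is orchestrating, versioning, and monitoring these steps rather than the arithmetic itself.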
2018-11-14 00:00:00 Read the full story.

 

Business AI: Use cases and applications of AI in the corporate sector

The role of artificial intelligence in business is becoming more and more prominent, and the number of organizations that have adopted the technology at the enterprise level is increasing. Implementing this powerful technology has numerous benefits, including making faster and more informed decisions, increasing operational efficiency, and creating innovative new products and services, among many others.

The desired business outcomes expected from the application of AI are also diverse. According to an Ernst & Young survey of attendees at the EmTech Digital Conference, the top three were improving or developing new products and services, achieving cost efficiencies and streamlined business operations, and accelerating decision-making.
2018-11-17 05:44:43+00:00 Read the full story.

 

Here Are the 20 Best Data Analytics Software Tools for 2019

The marketplace for the best data analytics software is mature and crowded with excellent products for a variety of use cases, verticals, deployment methods and budgets. Traditional business intelligence providers continue to offer dashboard and reporting capabilities that have remained staples to the market since widespread adoption of data analytics began more than a decade ago. Disruptive newcomers are bringing new technologies to the table so that organizations can take full advantage of data.

There are very large providers we refer to as ‘mega-vendors’, like Microsoft, Tableau, Qlik, SAP, and IBM. There are also lesser-known innovators with interesting products that play in niche areas, such as ThoughtSpot, Pyramid Analytics, and ClearStory Data. In an attempt to assist you with what can be a daunting task of selecting the right product, these are the top 20 best data analytics tools for 2019.
2018-11-16 15:20:25+00:00 Read the full story.

 

Trifacta Extends Data Preparation to DataOps with New Functionality for Data Engineers

Trifacta, the global leader in data preparation, today announced a new set of capabilities designed to enable data engineers within data operations (DataOps) practices to more efficiently develop, test, schedule and monitor data preparation pipelines in production. With RapidTarget, Trifacta customers can use an existing data model to intelligently guide transformations and accelerate the process of generating a new output dataset that matches the predefined schema. Automator provides end-to-end management of scheduling, monitoring and refining preparation workflows. These new features, along with the new Deployment Manager framework, facilitate the work of data engineers in administering data pipelines that feed analytics, machine learning and data science initiatives.
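The idea of schema-guided output (as described for RapidTarget, generating a dataset that matches a predefined schema) can be sketched as coercing raw records to a target schema. A hypothetical stdlib example; the schema, the `conform` function, and the sample record are invented and are not Trifacta’s implementation.

```python
# Toy sketch of schema-driven preparation: select, order, and cast raw
# fields so the output matches a predefined target schema. Fields not
# in the schema are dropped; missing fields become None. Illustrative
# only -- real tools infer transformations far more intelligently.

TARGET_SCHEMA = [("id", int), ("amount", float), ("currency", str)]

def conform(record):
    out = {}
    for name, cast in TARGET_SCHEMA:
        value = record.get(name)
        out[name] = cast(value) if value is not None else None
    return out

raw = {"id": "42", "amount": "19.99", "currency": "USD", "extra": "dropped"}
row = conform(raw)
```

A production pipeline would add validation, error handling, and type-inference; the point here is only that a target schema can drive the transformation rather than the other way around.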
2018-11-15 00:10:57-08:00 Read the full story.

 

Databricks, Talend Expand Cloud Access to Spark

Databricks and Talend, the cloud data integration vendor, are joining forces to help data jockeys scale their integration efforts using the Apache Spark analytics engine hosted on Talend’s cloud. Databricks, founded by the creators of Apache Spark, and Talend (NASDAQ: TLND) said integration between the cloud service and Databricks’ analytics platform would enable data engineers to leverage the cluster computing framework for processing large data sets at scale. The integration would replace manual coding with a drag-and-drop interface, the partners said Thursday (Nov. 15).
2018-11-15 00:00:00 Read the full story.

 

Veritas Technologies Creates New Solution that Uses AI and Machine Learning

Veritas Technologies, provider of enterprise data protection, is unveiling Veritas Predictive Insights, a new solution that utilizes artificial intelligence (AI) and machine learning (ML) algorithms to deliver always-on proactive support. Drawing on years of encrypted event data from thousands of Veritas appliances, Veritas Predictive Insights’ cloud-based AI/ML engine monitors system health, detects potential issues, and initiates proactive remediation before problems occur. Predictive Insights also enhances Veritas product availability and customer satisfaction by helping businesses reduce unplanned downtime, ensure faster fault resolution, and lower overall total cost of ownership.
2018-11-16 00:00:00 Read the full story.

 

Navigating through an increasingly complex data maze

While producers of data platforms and services are in a period of incredible growth and change, the stakes are enormous, says Raghu Ramakrishnan, CTO for Data at Microsoft. Data analytics is an integral part of how organisations plan and execute, and it is rapidly becoming ever more entrenched. Organisations are becoming more data-centric – both in gathering data to base decisions on, and in becoming more data-driven in how they make decisions and execute on them. Despite the new opportunities opened up by this new data dawn, businesses need the right tools to navigate its landscape.
2018-11-15 00:00:00 Read the full story.

 


 

New ThoughtSpot 5 Includes Voice-Driven SearchIQ Analytics

Among the many new features in ThoughtSpot 5 is a new voice-driven analytics engine called SearchIQ. The analytics software maker said the latest version, unveiled Nov. 14 at the company’s user conference, lets people use their own voice and “casual language” to ask the system to analyze data from customers and a wide range of other sources such as product sales, returns and human resources. ThoughtSpot produces business-intelligence analytics search software. The company is based in Palo Alto, Calif., with additional offices in London and Seattle. ThoughtSpot said it is using advances in machine learning, artificial intelligence and automation to offer a solution designed to help companies not only analyze the current state of their business but also predict what’s going to happen next. “What we’re trying to do is solve the problem of data access from a human angle,” Ajeet Singh, co-founder and executive chairman of ThoughtSpot, told eWEEK. “An HR administrator is not going to know how to use data visualization tools and shouldn’t have to.”
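The core idea of search-driven analytics, mapping the words of a casual phrase onto the measure, dimension, and filter slots of a structured query, can be illustrated with a toy parser. This is purely illustrative; the vocabulary and the `parse` function are invented and bear no relation to how SearchIQ actually works.

```python
# Toy illustration of search-driven analytics: translate a casual
# phrase into a structured query by matching tokens against known
# measures, dimensions, and time filters. Real engines like SearchIQ
# use far more sophisticated NLP; this only shows the slot-filling idea.

MEASURES = {"sales", "returns", "revenue"}
DIMENSIONS = {"region", "product", "store"}
TIME_FILTERS = {"last year": "year = 2017", "this year": "year = 2018"}

def parse(phrase):
    phrase = phrase.lower()
    query = {"measure": None, "group_by": None, "where": None}
    for token in phrase.split():
        if token in MEASURES:
            query["measure"] = token
        elif token in DIMENSIONS:
            query["group_by"] = token
    for pattern, clause in TIME_FILTERS.items():
        if pattern in phrase:
            query["where"] = clause
    return query
```

For example, `parse("Sales by region last year")` fills all three slots, which a backend could then render as a grouped, filtered aggregation.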
2018-11-15 00:00:00 Read the full story.

 

Duco Continues Global Growth: Opens New Office in Singapore

“At Duco we believe in doing things a little differently,” said Victoria Harverson, Business Development Director, Asia Pacific. “We are focused on leveraging machine learning and self-service technology to solve long standing data problems that have traditionally suffered from manual and legacy point solutions. Customers love our enterprise-ready SaaS platform and are looking to Duco to be the data integrity and control point in their future architecture. Our new office will support our continued expansion across Asia, ensuring clients get the best possible service and support from Duco. We look forward to joining and working with the Singapore Fintech community – especially those that are as focused as Duco on innovation through machine learning and artificial intelligence.”
2018-11-15 00:00:00 Read the full story.

 

DarwinAI Announces Explainability Platform for Neural Network Performance

“DarwinAI, a Waterloo, Canada startup creating next-generation technologies for Artificial Intelligence development, today announced the next milestone in its product roadmap with the release of its explainability toolkit for network performance diagnostics. Based on the company’s Generative Synthesis technology, this first iteration of the tool provides granular insights into neural network performance. Specifically, the platform provides a detailed breakdown of how a model performs for specific tasks at the layer or neuron level. This deep understanding of the network’s components and their involvement in specific tasks enables a developer to fine-tune the model designs for efficiency and accuracy. The introduction of explainability comes two months after the company announced its emergence from stealth, its Generative Synthesis platform, and $3 million in seed funding, co-led by Obvious Ventures and iNovia Capital, as well as angels from the Creative Destruction Lab accelerator in Toronto.”
2018-11-15 00:05:21-08:00 Read the full story.

 


Behind a Paywall/Registration wall…

 

Big Data Sourcebook: Data Lakes, Analytics and the Cloud

The world of data management has changed drastically, even from just a few years ago. Data lake adoption is on the rise, Spark is moving toward the mainstream, and machine learning is starting to catch on at organizations seeking digital transformation across industries. All the while, the use of cloud services continues to grow across use cases and deployment models. Download the sixth edition of the Big Data Sourcebook today to stay on top of the latest technologies and strategies in data management and analytics.
2018-11-15 00:00:00 Read the full story.

 

Distributed Data Analytics: Strategic Advantage at the Edge

Gaining the advantage in the years to come means going to the edge. Businesses are discovering their future lies in the ability to leverage strategic edge analytics, now possible through the surge in compute intelligence closer to where data is created, drawing on the volumes of data generated through interactions with cameras, sensors, meters, smartphones, wearables, and more. In conjunction, processor, storage and networking capabilities to support local embedded analytics on these devices, and across them through peer-to-peer interactions (on local or nearby mezzanine or gateway platforms), are also increasing.

This explosion of intelligence at the network’s edge is often termed the Internet of Things (IoT). The value in the IoT revolution is much more than simply connecting devices and downloading data: it means unfolding systems into global networks that connect companies more intimately with customers, partners, employees, and other constituencies. Strategic edge analytics makes this a reality, without the complexity and latency of the centralized hub-and-spoke approaches seen thus far with IoT.
2018-11-14 00:00:00 Read the full story.

 

Getting Started with Deep Learning on Apache Spark™

Deep learning is driving rapid innovations in artificial intelligence and influencing massive disruptions across all markets. However, leveraging the promise of deep learning today is extremely challenging. The explosion of deep learning frameworks is adding complexity and introducing steep learning curves. Scaling out over distributed hardware requires specialization and significant manual work; and even with the combination of time and resources, achieving success requires tedious fiddling and experimenting with parameters.
2018-11-15 00:00:00 Read the full story.

 


This news clip post is produced algorithmically based upon CloudQuant’s list of sites and focus items we find interesting. If you would like to add your blog or website to our search crawler, please email customer_success@cloudquant.com. We welcome all contributors.

This news clip and any CloudQuant comment is for information and illustrative purposes only. It is not, and should not be regarded as “investment advice” or as a “recommendation” regarding a course of action. This information is provided with the understanding that CloudQuant is not acting in a fiduciary or advisory capacity under any contract with you, or any applicable law or regulation. You are responsible to make your own independent decision with respect to any course of action based on the content of this post.