
AI & Machine Learning News. 06, August 2018

Amazon’s Echo Look fashion assistant lacks critical context

Amazon’s Echo Look is an Alexa fashion assistant that combines human and machine intelligence to tell you how you look in an outfit, keeps track of what’s in your wardrobe, and recommends clothes to buy from Amazon.com.

Made generally available to the public in recent weeks, the Echo Look debuted in April 2017 but was available by invite only for more than a year — a first for Alexa-enabled devices. Over time, Amazon will team Echo Look with Prime Wardrobe, an Amazon program akin to modern fashion companies like Stitch Fix and Trunk Club that lets users try on clothes and send back what they don’t want to buy. All the while, Amazon’s facial recognition software Rekognition keeps making headlines for being used by U.S. law enforcement agencies and for misidentifying more than two dozen members of Congress as criminals.

Let’s examine why it can be a lot of fun to use the Echo Look, why it took Amazon a year to make the device generally available, and why its fashion assistant’s AI is inherently biased.
2018-08-03 00:00:00 Read the full story.

CloudQuant Thoughts… Style Check Reasons explains why the AI chose one outfit over another. “It started out highly human and will become more and more machine, but as one might expect with fashion trends constantly changing, there will always be some human engagement in this keeping track of what styles are now in and what’s changing in fashion”. Again we see products where the AI cannot quite achieve the desired result (autonomous cars, autonomous robots), but we can put a human in at the sticking point as a placeholder, launch a product and, as the AI gets better, slowly nibble away at the human element. Watch this AI fashion area expand rapidly. After all, Amazon has patented a mirror that helps you dress!

 

 

The dawn of AI marketing is here: You can do it too!

Coming into this year, we were surprised to find no AI events for the marketing or growth agenda, so we planned Transform in San Francisco on August 21 and 22, focused exclusively on real business results and applications for marketing, product, and growth engineering executives.

We’re excited because Transform has some amazing speakers — and case studies with real results. There’s so much hype around AI that it starts to sound like voodoo, or something only achievable if you’re Google, Amazon, or Microsoft and therefore able to hire data scientists with salaries of starting NFL quarterbacks. But after talking with and studying upstarts like Stitch Fix and Hopper, we realized just about any company can avail themselves of AI technology. Our motto for this event became: “You can do it too!”.
2018-08-03 00:00:00 Read the full story.

CloudQuant Thoughts… I can see my elevator pitch for Netflix : “Think Madmen meets AI”

 

Bringing Back the Mighty Chiplet

Big things can often fit into small packages, especially if those packages are tightly bound. The concept of a specialized chiplet is about as old as the microprocessor, but demands for ever more processing with power efficiency and high bandwidth are reviving the idea.

This reinvigoration of chiplets comes at a time when the U.S. defense research agency DARPA is refreshing investment in novel architectures and approaches to processing growing data volumes. The agency’s Electronics Resurgence Initiative (ERI) is a five-year, $1.5 billion program to overcome Moore’s Law hurdles and develop new microelectronics that can fit into a wide range of government analytics and IT systems projects. Chip giant Intel took part in the recent ERI Summit, where industry and academia came together to sift through ideas for the post-Moore’s Law era of system design, including a discussion of how specialized chiplets might be the cure to some of DARPA’s woes.
2018-07-31 00:00:00 Read the full story.

CloudQuant Thoughts… We are all used to the idea of offloading our graphical needs to a GPU, and now the ability of GPUs to do parallelized processing makes them ideal for AI/ML tasks. But we are seeing more and more secondary processors on the mainboards of consumer products. Apple’s T2 chip, which recently made its way into the new MacBooks, is a perfect example of a tech firm pairing an industry-standard CPU for the main processing with a secondary custom processing unit for other tasks to stand out from the competition.

 

AWS now has $16 billion of unrecognized revenue

Amazon said in its quarterly report that AWS has at least $16 billion in backlog revenue, up from the previous quarter’s $12.4 billion. The average remaining life of those contracts also extended from 3.2 years to 3.5 years in its most recent quarter. It’s the latest sign of AWS signing more long-term contracts, as opposed to customers paying by the hour.

Amazon’s cloud service is locking its customers into bigger and longer-term contracts, signaling a deeper commitment from its already market-leading user-base.


Backlog revenue is a non-balance sheet item that represents the total value of future contract obligations. Amazon describes it as “commitments in customer contracts for future services that have not yet been recognized.” The remaining balance gets recognized as revenue once the service is billed and delivered.
2018-08-01 00:00:00 Read the full story.

CloudQuant Thoughts… It would be interesting to see if they can split this between Web Serving and Data Research uses.

 

Customers buy homes – they don’t buy mortgages

Buying a house is one of the biggest commitments most people ever make – both financially and emotionally. And big usually implies complex. But does that mean buying a mortgage also needs to be complex? Why can’t it be made simpler, like other kinds of loans? Why should a customer’s experience while obtaining a mortgage be any different from the experience of seeking a personal or auto loan? What would it take to make it possible to simply buy the house on Amazon and take delivery of the keys the next day from a drone?
2018-08-03 00:00:00 Read the full story.

CloudQuant Thoughts… Earlier this year, Rocket Mortgage helped Quicken move into the top spot for mortgages in the US, ahead of Wells Fargo. Bank of Queensland in Australia has radically revamped its home loan business, reducing its “Time to Yes” by 99% while also reducing “Total Touch Time” by 85%. Online banking, tax returns, and investment records mean that most of us no longer need to search for old payslips, and Rocket Mortgage offers a “mortgage in less than 8 minutes” – but this is not the end of mortgage disruption. I purchased my first house with an Australian-style mortgage, which included automatically escalating mortgage payments (how many of us are paid the same at the start of our mortgage as at the end?) and a no-penalty early payoff. US mortgages still have a long way to go, and there is still plenty of opportunity for tech-driven disruption.

 


Below the Fold…

 

Facebook’s chief AI scientist says that Silicon Valley needs to work more closely with academia to build the future of artificial intelligence

Facebook’s chief AI scientist, Yann LeCun, says that letting AI experts split their time between academia and industry is helping drive innovation.
Writing for Business Insider, the executive and NYU professor argues that the dual-affiliation model Facebook uses boosts individual researchers and the industry at large.
A similar model has historically been practiced in other industries, from law to medicine.
2018-08-10 00:00:00 Read the full story.

 

Tech, Big Data, and the Capital Markets

“I met some people working on financial technology. They were really cool and interesting people solving hard problems and I wanted to join them. That was how I made my career decision.”

In this video from MarketsWiki Education’s World of Opportunity event in New York, Mike Beller, CEO of Thesys Technologies, talks about the electronification of markets and the explosion of trading venues it created. Beller says managing the mass amounts of data generated by this phenomenon is the industry’s biggest challenge.
2018-07-30 14:59:55+00:00 Read the full story.

 

Data breaches: app security under threat

In July news broke that a person’s data on a well-known mobile payment service app could be seen publicly (Venmo… it was Venmo!).

In case you missed it, a researcher analysed over 200 million publicly available transactions made using the money-sharing app. Her aim was to draw attention to the amount of information that can be gathered using peer-to-peer apps. She was able to access the data through a public application programming interface – even for users who had set their accounts to private – and build a picture of their lives with surprising accuracy. From burgers to cannabis oil, if you bought it, she knew about it.

Her effort to expose how peer-to-peer apps work highlighted that some people place more trust than they should in the default settings of all types of apps.
2018-08-02 00:00:00 Read the full story.

 

5 Resources to Inspire Your Next Data Science Project

We are exposed to seemingly endless streams of data science career advice, but there is one topic that doesn’t get quite enough love: side projects. Side projects are awesome for plenty of reasons, but I like the way Julie Zhuo puts it with a simple Venn diagram:

Side projects serve as a way to apply data science in a less goal-driven environment than you probably experience at work or school. They offer an opportunity to play with data however you want, while learning practical skills at the same time.

Aside from being a lot of fun and a great way to learn new skills, side projects also help your chances when applying for jobs. Recruiters and managers love to see projects that show you’re interested in data in a way that goes beyond classes and employment.
2018-08-03 12:54:27.800000+00:00 Read the full story.

 

New Apple Policies Threaten Alternative Data

New Terms of Service required of developers marketing their products via the Apple App Store have introduced heightened legal risks to using data collected by iOS apps – an issue that could challenge certain alternative data vendors and the asset managers who use this data. These new policies have been implemented by Apple in the wake of Facebook’s recent privacy scandals.

In the past few months Apple has changed its App Store Terms of Service for App Developers to better address new privacy laws like GDPR, which was implemented in May 2018, as well as the growing concerns of individuals about the privacy of their data collected by these apps. In the wake of these changes, Apple has recently started to remove third-party apps from their App Store for violating the new terms.

The most concerning changes for the alternative data industry are found in Section 5.1 of the App Store Review Guidelines. These changes can be summarized as follows:

  • App developers must obtain consent from users to collect their data. In addition, users must be informed how and where their data is being used.
  • Data collected for one purpose may not be repurposed without additional user consent.
  • Data collected by apps sold through Apple’s App Store may only be used for two purposes: to improve the app or to support the serving of advertising.

2018-08-06 02:15:09+00:00 Read the full story.

 

AutoKeras: The Killer of Google’s AutoML

Will Google’s AutoML win the AI/ML game? Many say yes, but at $20 an hour, others are betting on the free open-source AutoKeras to defeat the Goliath. Do we want the best AI/ML knowledge out in the open or hidden behind a “Wizard of Oz” curtain?
2018-08-05 19:59:12.213000+00:00 Read the full story.

 

Understanding Data Science Classification Metrics in Scikit-Learn in Python

In this tutorial, we will walk through a few of the classification metrics in Python’s scikit-learn and write our own functions from scratch to understand the math behind a few of them.

One major area of predictive modeling in data science is classification. Classification consists of trying to predict which class a particular sample from a population belongs to. For example, if we are trying to predict whether a particular patient will be re-hospitalized, the two possible classes are re-hospitalized (positive) and not re-hospitalized (negative). The classification model then tries to predict whether each patient will be re-hospitalized or not. In other words, classification is simply trying to predict which bucket (predicted positive vs. predicted negative) a particular sample from the population should be placed in.
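As a quick illustration of the from-scratch approach the tutorial describes, here is a minimal sketch (ours, not the article’s code) computing accuracy, precision and recall for binary labels from the confusion counts:

```python
# Classification metrics from scratch: count true/false positives and
# negatives, then derive accuracy, precision, and recall.

def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def accuracy(y_true, y_pred):
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / (tp + fp + tn + fn)

def precision(y_true, y_pred):
    tp, fp, _, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fp)

def recall(y_true, y_pred):
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn)

# Hypothetical re-hospitalization labels: 1 = re-hospitalized, 0 = not.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 0.75
print(precision(y_true, y_pred))  # 0.75
print(recall(y_true, y_pred))     # 0.75
```

scikit-learn’s `accuracy_score`, `precision_score` and `recall_score` compute the same quantities; writing them out once makes the math behind the library calls concrete.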
2018-08-05 19:43:11.811000+00:00 Read the full story.

 

Drake — Using Natural Language Processing to understand his lyrics

We know for a fact that Drake’s work is popular but why are the majority of his songs such a hit? Is it the production? Is it the marketing? It is probably a combination of factors. However, the aspect I will be focusing on is his lyrics. Drake’s work is expansive and well-documented, so getting text data was not a difficult task. However, figuring out how to analyze it was. But thanks to recent improvements in NLP (Natural Language Processing), analyzing text data is now easier than ever.
2018-08-04 21:54:49.264000+00:00 Read the full story.

 

The Best Machine Learning GitHub Repositories & Reddit Threads from July 2018

 

GitHub Repositories

  • Image Outpainting
  • Text Classification Models with TensorFlow
  • MatchZoo
  • GANimation
  • GAN Stability

Reddit Discussions

  • Which deep learning papers should I implement to learn?
  • Use of Science at Organizations like Google Brain/FAIR/DeepMind
  • Some Good Books to Gain a Theoretical Understanding
  • Discussion on how AI will Impact Jobs, both Present and in the Future
  • Common Mistakes People make in Data Visualization

2018-08-01 22:36:07+05:30 Read the full story.

 

Intake: Taking the Pain out of Data Access – New Anaconda Data Access Layer

Defining and loading data-sets costs time and effort. The data scientist needs to know what data are available, and the characteristics of each data-set, before going to the effort of loading and beginning to analyze some specific data-set. Furthermore, they might need to learn the API of some Python package specific to the target format. The code to do such data loading often makes up the first block of every notebook or script, propagated by copy-and-paste.
2018-08-02 11:05:44-05:00 Read the full story.

 

Self-supervised learning gets us closer to autonomous learning

Self-supervised learning is getting attention because it has the potential to solve a significant limitation of supervised machine learning: the need for lots of external training samples, or supervisory data consisting of inputs and corresponding outputs. In a recent interview with Science and Future magazine, Yann LeCun presented self-supervised learning as a significant challenge for AI in the next decade.
2018-08-06 12:31:01.376000+00:00 Read the full story.

 

Implement CRISP Data Science with AWS SageMaker

This article aims to demonstrate the capability and agility of AWS to develop and host both industry-standard machine learning products and research-level algorithms.

CRISP (Cross-industry standard process for data mining) is an agile workflow or framework that captures the separation of concerns in Data Science well. Standard CRISP includes seven components (Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation, Deployment, Monitoring) and four phases (form business questions and collect data; process data and build a model on it; package the model into a data product and deploy it; monitor the model and collect information).

It is not uncommon to go back-and-forth in the process. For example, we could be forced to collect certain data to answer a business question, or experiment with various statistical models to lift the accuracy of the model.
2018-08-05 19:59:12.213000+00:00 Read the full story.

 

Various Optimisation Techniques and their Impact on Generation of Word Embeddings


Word embeddings are vector representations assigned to words that have similar contextual usage. What is the use of word embeddings, you might ask? Well, if I mention Messi, you immediately know that the context is football. How does that happen? Our brains have associative memories, and we associate Messi with football…

To achieve the same, that is, to group similar words, we use embeddings. Embeddings started off with the one-hot encoding approach, where each word in the text is represented using an array whose length is equal to the number of unique words in the vocabulary.
2018-08-06 10:26:01.236000+00:00 Read the full story.

 

Policy Networks vs Value Networks in Reinforcement Learning

In Reinforcement Learning, agents take random actions in their environment and learn to select the right ones out of many to achieve their goal and play at a super-human level. Policy and value networks are used together in algorithms like Monte Carlo Tree Search to perform Reinforcement Learning. Both networks are an integral part of the exploration method in the MCTS algorithm.

They are also known as policy iteration and value iteration, since they are calculated many times, making it an iterative process. Let’s look at why they are so important in Machine Learning and what the difference between them is.
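To make the value side of this concrete, here is a toy tabular value-iteration sketch (entirely our own illustration: a five-state chain where reaching the last state pays reward 1, and a real value network would replace the table with a learned function approximator):

```python
# Tabular value iteration on a tiny chain MDP: states 0..4, actions
# move left (-1) or right (+1), reaching state 4 yields reward 1.

N_STATES, GOAL, GAMMA = 5, 4, 0.9

def step(state, action):
    """Deterministic transition; returns (next_state, reward)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

def value_iteration(iters=50):
    v = [0.0] * N_STATES
    for _ in range(iters):
        v_new = []
        for s in range(N_STATES):
            if s == GOAL:              # terminal state keeps value 0
                v_new.append(0.0)
                continue
            # Greedy backup: best action value over left/right.
            q = [r + GAMMA * v[nxt]
                 for nxt, r in (step(s, a) for a in (-1, +1))]
            v_new.append(max(q))
        v = v_new
    return v

def greedy_policy(v):
    """Extract the greedy action (+1 or -1) for each non-goal state."""
    return [max((-1, +1),
                key=lambda a: step(s, a)[1] + GAMMA * v[step(s, a)[0]])
            for s in range(N_STATES - 1)]

v = value_iteration()
print([round(x, 3) for x in v])  # values rise toward the goal
print(greedy_policy(v))          # always move right
```

The policy network in the article plays the analogous role to `greedy_policy`, except that in deep RL both the value estimate and the action choice are produced by neural networks rather than a lookup table.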
2018-08-05 08:11:03.748000+00:00 Read the full story.

 

How to write your favorite R functions — in Python?

One of the great modern battles of data science and machine learning is “Python vs. R”. There is no doubt that both have gained enormous ground in recent years to become top choices of programming languages for data science, predictive analytics, and machine learning. In fact, in a recent article from IEEE, Python overtook C++ as the top programming language of 2018, and R has firmly secured its spot in the top 10.

However, there are some fundamental differences between these two. R was developed primarily as a tool for statistical analysis and quick prototyping of a data analysis problem. Python, on the other hand, was developed as a general-purpose modern object-oriented language in the same vein as C++ or Java, but with a simpler learning curve and a more flexible demeanor. Consequently, R continues to be extremely popular among statisticians, quantitative biologists, physicists, and economists alike, whereas Python has slowly emerged as the top language of choice for day-to-day scripting, automation, backend web development, analytics, and general machine learning frameworks, with an extensive support base and open-source development community.
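In the spirit of the article’s title, here is a small sketch (our own, not taken from the article) re-creating a few of R’s everyday vector functions in plain Python; the function names deliberately mirror R’s:

```python
# R-flavored helpers in Python: seq(), mean(), and sd().
import math

def seq(start, stop, by=1):
    """R-style seq(): inclusive range with a step."""
    out, x = [], start
    while x <= stop:
        out.append(x)
        x += by
    return out

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """R's sd(): sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

print(seq(1, 9, by=2))     # [1, 3, 5, 7, 9]
print(mean([1, 2, 3, 4]))  # 2.5
print(sd([2, 4, 4, 4, 5, 5, 7, 9]))
```

Note the one statistical gotcha: R’s `sd` uses the n − 1 (sample) denominator, whereas a naive Python port often divides by n, so the two languages can silently disagree.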
2018-08-04 21:10:56.829000+00:00 Read the full story.

 

How Google’s BigQuery ML Is Empowering Data Analysts

In case you were wondering, here’s another sign of the Google Cloud vs. Amazon Web Services war heating up. Google has now brought in the big guns in the analytical data warehousing space by embedding machine learning capabilities into Google BigQuery. Google BigQuery is a low-cost enterprise data warehouse and analytics service; its new machine learning capability is called BigQuery ML.

One of the key features of BigQuery is that it transforms SQL queries into complex execution plans, dispatching them onto execution nodes to promptly provide insights into the data. BigQuery enables developers to execute SQL as a massively parallel processing query with hundreds of CPU cores and ample disk storage, scanning and aggregating terabytes of data in seconds. BigQuery ML, a capability inside BigQuery enables analysts and data scientists to build and deploy ML models on massive structured or semi-structured datasets.
2018-08-04 07:55:40+00:00 Read the full story.

 

Product Overview and Analysis : SigOpt Automated Model Tuning

SigOpt’s optimization solution automates the tuning of any model built with any framework on any infrastructure to maximize the return on machine learning, artificial intelligence and general research investments. Built by experts for experts, this solution embeds an ensemble of Bayesian and global optimization algorithms within a standardized platform that is accessible through a simple REST API. This automated, scalable and comprehensive approach enables teams to tune much earlier and more often, which, in turn, helps transform traditional discrete data projects with mathematical outputs to continuously deployed products that improve business outcomes.
2018-08-03 00:00:00 Read the full story.

 

Inside Cloud AutoML: Google’s New Marketing Platform To Drive Cloud Revenue

If there is one big takeaway from the recently concluded Google Cloud Next 2018 conference, it is how the tech behemoth wants to position itself as a strong contender in the hybrid cloud business dominated by Amazon Web Services and Microsoft. This was evident in Google’s embrace of private, hybrid, edge and multi-cloud computing. Another key announcement was about the company expanding Cloud AutoML — the machine learning platform it announced at Google I/O last year — into new areas like Vision, Natural Language and AutoML Translation. This comes at a time when AWS cloud revenue increased 48.9 percent in the second quarter. As the cloud market moves into various cloud architectures, companies are getting more competitive in the face of continuous change and introducing additional artificial intelligence products to draw in more customers.
2018-08-04 07:51:14+00:00 Read the full story.

 

Bringing Intelligence to the Edge with Cloud IoT

There are also many benefits to be gained from intelligent, real-time decision-making at the point where these devices connect to the network—what’s known as the “edge.” Manufacturing companies can detect anomalies in high-velocity assembly lines in real time. Retailers can receive alerts as soon as a shelved item is out of stock. Automotive companies can increase safety through intelligent technologies like collision avoidance, traffic routing, and eyes-off-the-road detection systems.

But real-time decision-making in IoT systems is still challenging due to cost, form factor limitations, latency, power consumption, and other considerations. We want to change that.
2018-08-03 10:37:02+00:00 Read the full story.

 

How Machine Learning Is Changing The Software Development Paradigm

Can machine learning be used to accelerate the development of traditional software development lifecycle? As artificial intelligence and other techniques get increasingly deployed as key components of modern software systems, the hybridisation of AI and ML and the resultant software is inevitable. According to a research paper from the University of Gothenburg, AI and ML technologies are increasingly being componentised and can be more easily used and reused, even by non-experts. Recent breakthroughs in software engineering have helped AI capabilities to be effectively reused via RESTful APIs as automated cloud solutions.
2018-08-04 07:45:04+00:00 Read the full story.

 

Liquidnet Launches Discovery as Part of VHT Platform

Block trading venue and technology firm Liquidnet just upped the ante on its Virtual High Touch platform. The firm launched Discovery – the first integration from the OTAS acquisition and the start of a new generation of trader intelligence tools designed to facilitate more value-added conversations with portfolio managers, elevating the desk’s ability to generate short-term alpha.

Virtual High Touch combines advanced data analytics, unique liquidity sourcing tools, advanced algorithms, and real-time decision support. The idea behind VHT is that technology – when delivered in a meaningful, insightful and actionable way – can make the difference in terms of capturing and delivering alpha. Discovery is the culmination of seven years of research and development, leveraging AI and machine learning that draws in massive amounts of market data, distills it into actionable insight tailored to every order on a blotter that is synced with Liquidnet.
2018-07-31 07:07:16-04:00 Read the full story.

 

Aera Technology’s Cognitive OS Helps Fine Tune Business Operations

For all the development work that’s been done on enterprise applications that use artificial intelligence to analyze corporate data, the CEO of Aera Technology believes companies aren’t gaining many insights into how well their businesses are performing.

“There’s been a lot of work done to help companies process information, but there hasn’t been any work done to help them think,” Fred Laluyaux, president and CEO of Aera Technology told eWEEK at a briefing here at his company’s headquarters. Aera Technology is trying to change that by deploying a cloud-based cognitive operating system that is designed to analyze operational data to show corporate decision makers how they can take actions that improve business performance.
2018-07-31 00:00:00 Read the full story.

 

UOB to launch digital-only bank

United Overseas Bank is to launch a digital-only bank to cater to the massive and increasing number of mobile-first consumers in South East Asia. UOB says the new bank will be powered by next-generation artificial intelligence, machine learning, data analytics, user interface design and smartphone capabilities, leveraging in-house expertise alongside innovations provided by its recent credit assessment joint venture with Avatec.ai and its investment in and partnership with Personetics.
2018-08-03 10:33:00 Read the full story.

 

Aura, Telefónica’s AI, learns the language of people to transform customer engagement

With over 350 million customers in 17 countries, Telefónica is one of the largest telecommunications companies in the world. But the Spanish-based organization wants to do more than connect people with mobile, landline, internet and pay TV services. It wants to make digital life easier for customers.

Founded in 1924, Telefónica has transformed into a modern, data-driven company in recent years, with major investments in infrastructure and technology. The upgrades enabled the company to launch Aura, an artificial intelligence-powered digital assistant that “learns the language of people so that they don’t have to learn the language of machines,” says Telefónica. It is available in Spain, Brazil, the United Kingdom, Germany, Argentina and Chile through mobile apps, the web and third-party channels including Facebook Messenger and Google Assistant.
2018-08-01 09:00:03-07:00 Read the full story.

 

Deriving value from data: How AI can power smarter credit decisions

The reality of AI creates an opportunity – and a responsibility – for banks to team up with fintech companies in order to stay ahead of emerging competition from disruptors.

Banks have put AI on the innovations agenda for years now, but have yet to execute. Meanwhile, the underserved SME financing market, a $2.6tn opportunity, could be snatched up by e-commerce platforms, payment processors, and even telecommunication companies. The time for banks to put AI into action is now, before they are displaced by disruptors. Former Cisco chairman and CEO John Chambers famously said that more than 40% of businesses would disappear over the next decade if they fail to execute on an AI strategy. That was back in 2015, so time is of the essence.
2018-08-01 00:00:00 Read the full story.

 

Digital Guardian Improves Data Loss Prevention With Behavior Analytics

Digital Guardian will announce on Aug. 6 that it is bringing user and entity behavior analytics (UEBA) capabilities to its Data Protection Platform.

The new UEBA capabilities will complement the data loss prevention (DLP) features in Digital Guardian’s platform, enabling organizations to more closely align identity and user behavior with security policy and enforcement. The UEBA feature makes use of machine learning to gain insight into user behavior to identify potential malicious actions.
2018-08-03 00:00:00 Read the full story.

 

Mindshare Medical launches AI cancer screening tech that can see data ‘beyond our perception’

You might think diagnosing cancer is easy: Someone either has a cancerous tumor or they don’t.

In practice, it’s much harder. Telling the difference between a deadly and a harmless lump is literally “beyond our perception” with current imaging technology, Ilya Goldberg told GeekWire. That’s where he thinks artificial intelligence can step in.

Goldberg is a longtime biologist and machine learning expert, and he is the co-founder and CTO of Mindshare Medical. The startup is developing artificial intelligence tools that can diagnose cancer using imaging data that is invisible to the human eye.
2018-08-02 15:40:51-07:00 Read the full story.

 

Interview: How Google Cloud CEO Diane Greene is navigating the tricky world of cloud-based AI

It has been a weird year for Google Cloud CEO Diane Greene. Just as the company’s cloud-computing division has started to hit its stride, signing enterprise deals and reaping the rewards of its tech investments in container technology, employee backlash over artificial-intelligence contracts with the military has kept things interesting during Greene’s third year running the third-place cloud-computing company.

While Google is still looking up at Amazon Web Services and Microsoft Azure when it comes to infrastructure cloud computing, it appears to be finding the balance between keeping engineers happy with cloud-native computing tools and courting enterprise company suits with service-level agreements and steak dinners.
2018-07-30 17:34:42-07:00 Read the full story.

 

BaFin: Big Data Meets Artificial Intelligence Study

How do technological developments in data processing and analysis impact the financial sector? What are the implications for financial stability, market supervision, firm supervision, and collective consumer protection?

The “Big Data meets Artificial Intelligence” report, which BaFin published on 15 June 2018, helps to answer these questions.
2018-08-06 05:25:38-04:00 Read the full story.

 

Scaling Game Simulations with DataFlow – Tetris

Dataflow is a great tool for building out scalable data pipelines, but it can also be useful in different domains, such as scientific computing. One of the ways that I’ve been using Google’s Cloud Dataflow tool recently is for simulating gameplay of different automated players.

Years ago I built an automated Tetris player as part of the AI course at Cal Poly. I used a metaheuristic search approach, which required significant training time to learn the best values for the hyperparameters. I was able to code a distributed version of the system to scale up the approach, but it took significant effort to deploy on the 20 machines in the lab. With modern tools, it’s trivial to scale up this code to run on dozens of machines.

It’s useful to simulate automated players in games for a number of reasons. One of the most common is to test for bugs in a game: you can have bots hammer away at the game until something breaks. Another reason for simulating gameplay is to build bots that can learn and play at a high level. There are generally three ways of simulating gameplay: real time, turbo and headless.
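The headless mode is the one that scales: no rendering, just the game loop run as fast as possible and the results aggregated. As a minimal sketch (our own toy stand-in, not the author’s Tetris engine), a random-walk race to a goal can play thousands of headless games in well under a second:

```python
# "Headless" simulation: run a trivial game with no rendering and
# aggregate a statistic (average turns to finish) over many plays.
import random

def play_one_game(rng, goal=20, max_turns=200):
    """Bot advances +1 or +2 per turn at random; returns turns taken."""
    position = 0
    for turn in range(1, max_turns + 1):
        position += rng.choice((1, 2))
        if position >= goal:
            return turn
    return max_turns

def simulate(n_games=1000, seed=42):
    """Play n_games headlessly with a seeded RNG; return mean turns."""
    rng = random.Random(seed)
    turns = [play_one_game(rng) for _ in range(n_games)]
    return sum(turns) / len(turns)

print(simulate())  # average turns across 1000 headless games
```

Swapping the toy game for a real engine and fanning `simulate` out across workers is exactly the kind of embarrassingly parallel job that Dataflow handles well.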
2018-08-04 23:15:47.008000+00:00 Read the full story.

 

HSBC joins $20m funding round for financial crime specialist Quantexa

Big data and enterprise intelligence outfit Quantexa has raised $20 million in a Series B funding round led by Dawn Capital and backed by HSBC and Albion Capital. Quantexa’s technology uses real-time entity resolution, network analytics and AI to knit together vast and disparate data sets and derive actionable intelligence to fight financial crime.

Founded in 2016, the London-headquartered firm now boasts a team of 90 staffers and has this year scored money laundering prevention deals with HSBC and Deloitte.
2018-08-03 00:01:00 Read the full story.

 

AI Weekly: Amazon Echo is basically a Dash button with speakers

Google and Amazon do not care much where you speak with their respective assistants. As long as you’re on team Alexa, Amazon doesn’t really mind that Show Mode appears to cannibalize the Echo Show. Of course, once you’re locked into Google Assistant, you may be more likely to choose YouTube TV over Prime Video, Google Pay over Amazon Pay, and most importantly, Google Express over Amazon’s massive online marketplace.

The news is also indicative of a gradual shift toward more visual experiences with AI assistants, which makes sense for a number of reasons.
2018-08-03 00:00:00 Read the full story.

 

The top data structures you should know for your next coding interview

Niklaus Wirth, a Swiss computer scientist, wrote a book in 1976 titled Algorithms + Data Structures = Programs. Forty-plus years later, that equation still holds true. That’s why software engineering candidates have to demonstrate their understanding of data structures along with their applications.

Almost all problems require the candidate to demonstrate a deep understanding of data structures. It doesn’t matter whether you have just graduated (from a university or coding bootcamp), or you have decades of experience. Sometimes interview questions explicitly mention a data structure, for example, “given a binary tree.” Other times it’s implicit, like “we want to track the number of books associated with each author.”
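The implicit “books per author” phrasing is a cue for a hash map. One idiomatic sketch in Python, with invented sample data:

```python
# "Track the number of books associated with each author" implicitly
# calls for a hash map; collections.Counter handles the counting.
from collections import Counter

books = [
    ("Wirth", "Algorithms + Data Structures = Programs"),
    ("Knuth", "The Art of Computer Programming, Vol. 1"),
    ("Knuth", "The Art of Computer Programming, Vol. 2"),
]

books_per_author = Counter(author for author, _title in books)
# books_per_author["Knuth"] == 2
```

Recognizing that cue — key/value lookup with O(1) average insert and access — is the skill the interviewer is probing for.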
2018-07-30 22:29:48.695000+00:00 Read the full story.

 

B.R.AI.N. Index Tracks Disruptive Technologies

As part of the expanding STOXX thematic offering, we are excited to introduce a new index tracking four technologies transforming business globally.

The iSTOXX® Developed Markets B.R.AI.N. Index is made up of companies that generate more than 50% of their revenue from biotechnology, robotics, artificial intelligence (AI) and nanotechnology. In covering these four trends, the index aims to give investors access to the economic benefits of modern industrial change.
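The stated inclusion rule — more than 50% of revenue from the four B.R.AI.N. fields — can be sketched as a simple filter. The company names and revenue figures below are invented for illustration; this is not STOXX’s actual methodology code:

```python
# Illustrative sketch of the index's stated inclusion rule: keep
# companies deriving more than 50% of revenue from biotechnology,
# robotics, AI, or nanotechnology. All data here is made up.
BRAIN_FIELDS = {"biotech", "robotics", "ai", "nanotech"}

def brain_revenue_share(revenue_by_segment):
    """Fraction of total revenue coming from the four B.R.AI.N. fields."""
    total = sum(revenue_by_segment.values())
    brain = sum(v for k, v in revenue_by_segment.items() if k in BRAIN_FIELDS)
    return brain / total if total else 0.0

companies = {
    "AlphaBio": {"biotech": 80, "retail": 20},
    "OmniCorp": {"ai": 30, "logistics": 70},
}

index_members = [name for name, rev in companies.items()
                 if brain_revenue_share(rev) > 0.5]
# index_members == ["AlphaBio"]
```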
2018-08-06 05:44:59-04:00 Read the full story.

 

Weekly Selection — Aug 3, 2018 – Towards Data Science

 

  • AutoKeras: The Killer of Google’s AutoML
  • 5 Resources to Inspire Your Next Data Science Project
  • Deploying Keras Deep Learning Models with Flask
  • Graphs & paths: PageRank
  • Interactive Data Visualization with D3.js
  • Brewing up custom ML models on AWS SageMaker
  • Using Uncertainty to Interpret your Model
  • Graphs and ML: Multiple Linear Regression

2018-08-03 12:54:27.800000+00:00 Read the full story.

 

 


Behind a Paywall…

The CEO of a hot Silicon Valley startup who built a product used by more than 15,000 companies reveals the single way you’ll know whether or not people will actually buy your product

Peter Reinhardt, CEO and co-founder of data analytics company Segment, discovered how to find the right product fit the hard way: he struggled to find a proper application for his company’s product twice before hitting gold the third time around.

In retrospect, he says there was a fundamental difference in early conversations surrounding the current iteration of his product, compared to the examples that turned out to be less successful.
2018-08-05 00:00:00 Read the full story.

 


This news clip post is produced algorithmically based upon CloudQuant’s list of sites and focus items we find interesting. If you would like to add your blog or website to our search crawler, please email customer_success@cloudquant.com. We welcome all contributors.

This news clip and any CloudQuant comment is for information and illustrative purposes only. It is not, and should not be regarded as “investment advice” or as a “recommendation” regarding a course of action. This information is provided with the understanding that CloudQuant is not acting in a fiduciary or advisory capacity under any contract with you, or any applicable law or regulation. You are responsible to make your own independent decision with respect to any course of action based on the content of this post.