Day 2 – Thursday, March 30, 2017

7:15 AM Breakfast & Networking

Location: Ballroom Reception

8:00 AM Introduction and Logistics

Location: Ballroom

8:10 AM Welcome

Location: Ballroom
Speaker: Bob Morgan
Bob will open the conference and welcome everyone to Charlotte.

8:30 AM Keynote Speaker – Nate Silver

Location: Ballroom
Introductions by: Ned Carroll

Fireside chat with Nate

Speaker: Tracy Kerrins
Tracy will engage Nate in conversation on a wide range of topics during the Fireside Chat portion of his keynote address.

The Signal and the Noise

Speaker: Nate Silver
Founder of FiveThirtyEight, author of “The Signal and the Noise,” and No. 1 on Fast Company’s list of the “Most Creative People in Business,” Mr. Silver will speak on “Powerful Predictions Through Data Analytics.” Nate Silver has become today’s leading statistician through his innovative analyses of political polling. He first gained national attention during the 2008 presidential election, when he correctly predicted the result of the primaries and the presidential winner in 49 states. In 2012, he called all 50 states.

9:45 AM Keynote Speaker – Tim Guerry

Location: Ballroom
Introduction by: Vinay Mummigatti

Data Provenance – A Brief History and a Bright Future: Reimagining the Data Supply Chain in a Big Data World

Speaker: Tim Guerry
The concept of the data-driven enterprise has become a client expectation and a business imperative. The future belongs to those who are able to unleash fresh insights that sit dormant inside rich data sources. Case in point: Amazon now derives 35% of its sales from highly relevant suggestions. Or consider Netflix, which made a successful $100 million bet on House of Cards based on detailed viewing, director, actor and subject matter data from 30+ million users. But insights will only be as good as the data from which they are derived. Accuracy and timeliness of data can be far more important than breadth.

Encouragingly, a maturing set of technologies now makes it possible to use rich data to create client and business value: data storage is no longer the obstacle it once was, the number of data sources is exploding, and new tools for analysis, data visualization, data management and robust modeling are available and evolving.

Despite all of this, “getting the data” is still a major impediment to unleashing these powerful insights. Many of our data supply lines are prisoners of the past: they are overly complex and siloed, batch oriented, dependent on disparate technologies, and require manual upkeep. Stronger data management practices, which could help, sometimes lead to a “right vs. fast” divergence that further complicates the ecosystem.

It’s time to rethink and redesign the entire data supply chain using the technology, tools and techniques now at our disposal.

10:45 AM Morning Break & Expo

Location: Ballroom Reception

11:00 AM Breakout Sessions VI

North Carolina Opportunities in the Data Economy

Location: Salon 2
Speaker: Shannon McKeen

Building a ‘Data-Driven Culture’ in Your Organization

Location: Salon 3
Speaker: Amaresh Tripathy
The biggest barrier and risk for analytics and data science in any organization is the inertia to act on the ‘insights’ of the model. Senior leaders who have made the technology and people investments are increasingly frustrated by the lack of top- or bottom-line impact, especially when analytics is not the ‘silver bullet’. Most of the time, the blame is put on the lack of a ‘data-driven culture’ in the organization. The discussion will break down analytical culture into discrete behaviors and tactics, along with some interesting real-world stories of organizations that were able to overcome the inertia and make analytics real. This is based on my experience driving change through analytics at more than 50 organizations.

Big Data’s Disparate Impact

Location: The Great Room II
Speaker: Andrew Selbst
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm’s use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court.

Addressing the sources of this unintentional discrimination will be difficult technically, difficult legally, and difficult politically. In the absence of a demonstrable intent to discriminate, the best legal hope for data mining’s victims would seem to lie in disparate impact doctrine. That hope would largely be in vain. After an overview of American anti-discrimination law, offered through the lens of Title VII’s prohibition of discrimination in employment, it will become possible to understand why the standard approach to discrimination law will be difficult to apply here. These challenges also throw into stark relief the tension between the two major theories underlying anti-discrimination law: anti-classification and anti-subordination. Finding a solution to big data’s disparate impact will require more than best efforts to stamp out prejudice and bias. Rather, it will require new regulatory strategies, some of which may require that we once again reexamine the meanings of “fairness” and “bias.”

Data: The Power & Promise of Precision Health

Location: Salon 1
Speakers: Rebecca Boyles, Sara Imhof (moderator), Alan Menius, & Winfred Shaw
Precision health allows the tailoring of medical care to the characteristics of individual patients. It allows preventive, therapeutic and diagnostic interventions to be tailored for those who may be expected to benefit, sparing expense and side effects for those who will not.

How will we accomplish this “precision health”? A significant share of its power and promise will come from collecting and assembling data from disparate human and electronic sources (e.g., genetics, family history, environment, lifestyle), analyzing and understanding the data, and then communicating the analysis in a way that is actionable (as appropriate) for improved health outcomes.

This panel addresses the challenges and opportunities for precision health from industry and research perspectives on clinical trials, drug development and safety, bioinformatics, and pharmaceutical data analytics.

11:45 AM Lunch & Expo

Location: Ballroom Reception

12:45 PM Keynote Speaker – Ric Elias

Segment of One

Location: Ballroom
Speaker: Ric Elias
Red Ventures CEO and passenger on US Airways flight 1549, the “Miracle on the Hudson,” discusses the convergence of luck and data in business and in life.

2:00 PM Breakout Sessions VII

The Future of Healthcare: Driven by Analytics

Location: Salon 1 & 2
Speaker: Michael Dulin
This session will discuss a novel framework for the implementation of population and public health initiatives driven by data and best evidence. Future directions for the advancement of healthcare delivery will also be covered, including a review of patient-centered approaches to care and the application of next-generation big data/analytics software to create patient-engaged healthcare delivery models.

Next Generation Analytics Architecture

Location: Salon 3
Speakers: Nitin Agrawal, Shashank Rao, & Mark Shilling
As financial institutions seek to grow revenues and improve customer experience through omnichannel interactions and transactions, they open themselves up to new, more sophisticated fraudulent and criminal activities. In addition, stricter regulations and increasing scrutiny of anti-money laundering and suspicious activity reporting require robust and trusted fraud detection capabilities.

Fraud detection systems in most banks have traditionally been reactive in nature, with suspicious transactions investigated and analyzed after the fact, offering very little ‘real’ protection from fraud. However, with emerging trends in analytics techniques and data architecture, banks and financial institutions are seeing significant opportunities to modernize their fraud detection and management functions. Predictive models are replacing transactional rule-based detection engines to score and detect potentially fraudulent transactions. Advances in storage architecture are enabling the use of full historical data sets instead of samples, greatly improving model accuracy. In addition, banks are looking to incorporate machine learning into fraud detection models to continuously adapt to fast-changing environments and behaviors.

Our presentation will provide an overview of next generation data architecture patterns that overcome the limitations of traditional fraud management systems and enable more proactive, accurate and nimble fraud management functions in banks.

2:45 PM Afternoon Break & Expo

Location: Ballroom Reception

3:15 PM Breakout Sessions VIII

Payments Industry Analytics

Location: Salon 1
Speakers: Nitin Agrawal & Tushar Puranik
The payments industry is undergoing significant disruption. Billions of transactions flow through the payments ecosystem, creating huge volumes of data; this is one industry that truly knows where individuals and businesses are spending. The industry is eager to leverage this data not only to drive its own growth and efficiency but also to provide insight and value to other industries. Innovations in data science and technology have been a key enabler of this phenomenon. In this session we will explore the key applications of data science to the payments industry and the advances in data science that have enabled them.

Accelerating adoption of Analytics in Financial Services Industry

Location: Salon 2
Speakers: Shiva Kumar, Lindsay Marshall, & Aish Sabbisetty
Many companies aspire to stay ahead through the use of advanced analytics. While the popularity and buzz around analytics has helped with initial adoption, companies are struggling to find the right model to grow and accelerate the use of analytics and prove its value within their business. The presentation provides an overview of the journey of the Advanced Analytics group at Brighthouse Financial (previously MetLife Retail) since its inception a couple of years ago. The Data Science group is growing in size and influence and has identified key factors for success, which the speakers will share through examples and stories.

Driving Digitization: A Model for Corporate Analytics Training

Location: Salon 3
Speakers: Dan McGurrin & Pamela Webber
This session will provide real-life use cases on how Cisco, in collaboration with global universities including North Carolina State University, has implemented an educational model to accelerate the digitization of Cisco’s business processes. Training is in place for employees from senior executives to individual contributors, and at each level of the organization the transformation is yielding tangible outcomes. North Carolina State University, one of Cisco’s key education partners, will also participate in this session to provide its perspective on curriculum development, program delivery, and the path from academic training to business outcomes.

Artificial Intelligence in the Enterprise

Location: The Great Room II
Speaker: Ron Bodkin
Near-perfect language translation, better-than-human image labeling, natural language understanding, dominating humans in strategy games, self-driving cars: what do all these achievements by machines have in common? Deep learning, driven by significant improvements in graphics processing units and by complex computational models, inspired by the human brain, that excel at capturing structures hidden in massive datasets. These techniques were pioneered at research universities and Internet behemoths but are now finding their way into the mainstream enterprise through open source tools and hardware offerings, benefiting from a steady decline in the cost of building large, parallel models at scale and from unmatched predictive accuracy in many application areas.

In this session we will discuss how deep learning can be integrated into mainstream enterprises to unlock significant business value and transform industries, with an emphasis on financial services use cases such as fraud detection, mobile personalization based on individual behavior, face recognition for authentication, and data center optimization.

We dive deep into how a large bank’s existing fraud detection engine was enhanced with deep learning algorithms that analyze tens of thousands of latent features. While the bank’s existing system was effective at blocking fraud, largely based on handcrafted rules created by the business on intuition and some light analysis, it had a high rate of false positives that created expense and inconvenience, and it had proved increasingly challenging and costly to update and maintain as fraudsters evolved their capabilities with increasing speed. We look at how the business assessed the opportunity to use AI, the use of an agile Analytics Ops approach, and the results of applying AI to detect fraud.

Model Risk Management

Location: The Den
Speaker: Richard Cooperstein
There was a time when developing, approving and using models was like art appreciation, especially for longer-term financial instruments without readily observable prices. Model builders would convene an exhibition for interested parties to critique results. If the graphs looked good, the models got used, and they might be adjusted over time so results continued to look good. However, the Big Questions below suggest that the stakes are too high for an artsy process. In recognition, OCC Bulletin 2011-12 provides comprehensive guidance on model risk management, expanding on OCC 2000-16, which focused on model validation. Validating models is now viewed as one component of a more complete process to manage all phases of model risk, particularly including soundness, usage and governance.

4:15 PM Keynote Speaker – Ritika Gunnar

Transforming with Cloud and Artificial Intelligence

Location: Ballroom
Speaker: Ritika Gunnar

5:15 PM Expo

Location: Ballroom Reception