Table of Contents
- KEYNOTE: Jim Kurose, National Science Foundation
- PLENARY: Henry Neeman, University of Oklahoma
- PLENARY PANEL: Carl Grant, University of Oklahoma
- PLENARY PANEL: Adrian W. Alexander, University of Tulsa
- PLENARY PANEL: Jennifer Fitzgerald, Samuel Roberts Noble Foundation
- PLENARY PANEL: Mark Laufersweiler, University of Oklahoma
- PLENARY PANEL: Robin Leech, Oklahoma State University
- PLENARY PANEL: Habib Tabatabai, University of Central Oklahoma
- PLENARY: Platinum Sponsor Speaker: Monica Martinez-Canales, Intel
- PLENARY: Platinum Sponsor Speaker: Stephen Wheat, HP
- Kate Adams, Great Plains Network
- Daniel Andresen, Kansas State University
- Joseph A. Babb, Tinker Air Force Base
- Dana Brunson, Oklahoma State University
- Bob Collins, Qumulo
- Eduardo Colmenares, Midwestern State University
- Bob Crovella, NVIDIA
- Nicholas A. Davis, University of Oklahoma - Tulsa
- Dan DeBacker, Brocade Communications Systems, Inc.
- Kendra Dresback, University of Oklahoma
- James Ferguson, National Institute for Computational Sciences
- Karl Frinkle, Southeastern Oklahoma State University
- John Hale, University of Tulsa
- Peter J. Hawrylak, University of Tulsa
- Kyle Hutson, Kansas State University
- Utkarsh Kapoor, Oklahoma State University
- Andrew Kongs, University of Tulsa
- Scott Lathrop, XSEDE/Shodor Education Foundation, Inc.
- David R. Monismith Jr.
- Mike Morris, Southeastern Oklahoma State University
- Mukundhan Selvam, Wichita State University
- D. Kent Snider, Mellanox Technologies
- DJ Spry, Dell Inc.
- Dan Stanzione, Texas Advanced Computing Center, University of Texas
- Mickey Stewart, Arista Networks
- Adam Tygart, Kansas State University
- Neal Wingenbach, Quantum
- Neal N. Xiong, Southwestern Oklahoma State University
Other speakers to be announced
PLENARY SPEAKERS
Assistant Director
Directorate
for Computer & Information
Science & Engineering
(CISE)
National
Science Foundation
Topic:
KEYNOTE
"Cyberinfrastructure:
An NSF Update and Reflections on
Architecture, Reference Models and Community"
Slides:
available after the Symposium
Abstract
Cyberinfrastructure is critical to
accelerating discovery and innovation
across all disciplines.
In order to support these advances,
the
National
Science Foundation
(NSF)
supports
a dynamic cyberinfrastructure ecosystem
composed of multiple resources
including
data,
software,
networks,
high-end computing,
and
people.
I will discuss
NSF's strategy to ensure
that researchers across the U.S.
have access to
a diversity of these resources
to continue our nation's ability to be
the discovery and innovation engine
of the world.
I will also provide an update on
cyberinfrastructure activities within NSF,
and reflect on the importance of
layered CI architectures and
reference models
for accelerating
the pace of scientific discovery.
Biography
Dr. Jim Kurose
is the Assistant Director of the
National Science Foundation
(NSF)
for
Computer and Information
Science and Engineering
(CISE).
He leads the CISE Directorate,
with an annual budget of
more than $900 million,
in its mission
to uphold the nation's leadership in
scientific discovery and engineering innovation
through its support of fundamental research in
computer and information
science and engineering
and transformative advances in
cyberinfrastructure.
Dr. Kurose is on leave from the
University of Massachusetts, Amherst,
where he is a Distinguished Professor in the
School of Computer Science.
He has also served in a number of
administrative roles at UMass
and has been a Visiting Scientist at
IBM
Research,
INRIA,
Institut
EURECOM,
the
University
of Paris,
the
Laboratory
for Information, Network and
Communication Sciences,
and
Technicolor
Research Labs.
His research interests include
network protocols and architecture,
network measurement,
sensor networks,
multimedia communication,
and modeling and performance evaluation.
Dr. Kurose has served on
many national and international
advisory boards.
He has received numerous awards
for his research and teaching,
including several conference best paper awards,
the
IEEE
Infocom Achievement Award,
the
ACM
Sigcomm
Test of Time Award,
a number of outstanding teacher awards,
and the
IEEE/CS
Taylor Booth Education Medal.
With Keith Ross,
he is the co-author of the textbook,
Computer Networking: A Top-Down Approach
(6th edition),
published by
Addison-Wesley/Pearson.
Dr. Kurose received
his Ph.D. in computer science from Columbia University and a Bachelor of Arts degree in physics from Wesleyan University.
He is a Fellow of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE).
Assistant Vice President/Research
Strategy Advisor
Information
Technology
Director
OU
Supercomputing Center for Education
& Research (OSCER)
Information
Technology
Associate Professor
College
of Engineering
Adjunct Associate Professor
School
of Computer Science
University
of Oklahoma
Topic:
"OSCER State of the Center Address"
Slides:
PowerPoint
PDF
Talk Abstract
The
OU
Supercomputing Center for
Education & Research
(OSCER)
celebrated its 14th anniversary
on August 31, 2015.
In this report,
we examine
what OSCER is,
what OSCER does,
what OSCER has accomplished
in its 14 years,
and where OSCER is going.
Biography
Dr.
Henry Neeman
is the
Director of the
OU
Supercomputing Center for Education &
Research,
Assistant Vice President
Information Technology
–
Research Strategy Advisor,
Associate Professor in the
College
of Engineering
and
Adjunct Associate Professor in the
School
of Computer Science
at the
University of
Oklahoma.
He received his BS in computer science
and his BA in statistics
with a minor in mathematics
from the
State
University of New York at Buffalo
in 1987,
his MS in CS from the
University of
Illinois at Urbana-Champaign
in 1990
and his PhD in CS from UIUC in 1996.
Prior to coming to OU,
Dr. Neeman was a postdoctoral research
associate at the
National
Center for Supercomputing Applications
at UIUC,
and before that served as
a graduate research assistant
both at NCSA
and at the
Center for
Supercomputing Research &
Development.
In addition to his own teaching and research,
Dr. Neeman collaborates with
dozens of research groups,
applying High Performance Computing techniques
in fields such as
numerical weather prediction,
bioinformatics and genomics,
data mining,
high energy physics,
astronomy,
nanotechnology,
petroleum reservoir management,
river basin modeling
and engineering optimization.
He serves as an ad hoc advisor
to student researchers
in many of these fields.
Dr. Neeman's research interests include
high performance computing,
scientific computing,
parallel and distributed computing
and
computer science education.
Associate Dean &
Chief Technology Officer
University
of Oklahoma Libraries
University
of Oklahoma
Topic:
"Panel:
Are We Wrangling, Managing or Maximizing
Our Organizations' Research Data?"
(Moderator)
Abstract
Organizations are facing
a wide range of issues and challenges
in dealing with the complexity of
meeting government mandates for
open access to research data.
These considerations include
knowing how to inventory the datasets,
describe them with relevant metadata,
enable and promote their access
so researchers can find them,
and enable dataset reusability.
The panel will also explore
the issues involved in
developing citation and curation
guidelines/policies.
Finally,
the costs involved in doing all this
will be examined,
including whether these costs should be
part of the Indirect Cost rates,
and how that might be achieved.
The goal is
for the session to provide attendees
a richer understanding of
the full range of issues involved in
maximizing their organizations' research data
and to examine ideas on how to proceed.
Biography
Carl Grant
is the
Associate Dean &
Chief Technology Officer
at the
University
of Oklahoma Libraries.
Previously,
he was the
Chief Librarian
and
President
of
Ex
Libris North America.
Mr. Grant has held
senior executive positions in,
and/or been the founder of,
a number of other
library-automation companies.
He has shown his commitment to libraries,
librarianship,
and industry standards
via his participation in the
Coalition
for Networked Information
(CNI),
the
American
Library Association
(ALA)
and
the
Association
of College & Research Libraries
(ACRL),
the
Library
Information Technology Association
(LITA),
and on the board of the
National
Information Standards Organization
(NISO),
where he has held offices as board member,
treasurer,
and
chair.
Under Mr. Grant's chairmanship,
NISO underwent a transformation
that resulted in
a revitalized library standards organization.
In recognition of his contribution to
the library industry,
Library
Journal
has named Mr. Grant an "Industry Notable."
Mr. Grant holds
a master's degree in
Library
& Information Science
from the
University
of Missouri at Columbia.
R. M. and Ida McFarlin Dean of the Library
McFarlin
Library
University
of Tulsa
Topic:
"Panel:
Are We Wrangling, Managing or Maximizing
Our Organizations' Research Data?"
Panel Abstract
Organizations are facing
a wide range of issues and challenges
in dealing with the complexity of
meeting government mandates for
open access to research data.
These considerations include
knowing how to inventory the datasets,
describe them with relevant metadata,
enable and promote their access
so researchers can find them,
and enable dataset reusability.
The panel will also explore
the issues involved in
developing citation and curation
guidelines/policies.
Finally,
the costs involved in doing all this
will be examined,
including whether these costs should be
part of the Indirect Cost rates,
and how that might be achieved.
The goal is
for the session to provide attendees
a richer understanding of
the full range of issues involved in
maximizing their organizations' research data
and to examine ideas on how to proceed.
Biography
Adrian Alexander
has served as the first
R. M. and Ida McFarlin Dean of the
McFarlin
Library
at the
University
of Tulsa
since February 2007.
Prior to that,
he was the first Executive Director of the
Greater
Western Library Alliance,
a non-profit consortium
representing 31 academic research libraries.
In his nine years at GWLA,
he organized and managed
a variety of collaborative library projects,
including
cooperative collection development,
electronic database licensing,
digital library development,
electronic publishing,
and
interlibrary loan.
He also spent 13 years
on the commercial side
of the information industry,
in a variety of
sales,
sales management
and
marketing management
roles with
a major serials subscription company.
Adrian was also a co-founder of
BioOne, Inc.,
a not-for-profit,
electronic publishing enterprise
that launched
a new scholarly publishing model
based on collaboration between
scholarly societies and academic libraries.
Adrian holds a master's degree in
Library Science
and a
Certificate
of Advanced Study
in
academic library administration
from the
University
of North Texas.
He is also the 2010 recipient of the
Outstanding
Alumnus Award
from the
College
of Information
at the
University of North Texas.
Data Curator
Library
Samuel
Roberts Noble Foundation
Topic:
"Panel:
Are We Wrangling, Managing or Maximizing
Our Organizations' Research Data?"
Panel Abstract
Organizations are facing
a wide range of issues and challenges
in dealing with the complexity of
meeting government mandates for
open access to research data.
These considerations include
knowing how to inventory the datasets,
describe them with relevant metadata,
enable and promote their access
so researchers can find them,
and enable dataset reusability.
The panel will also explore
the issues involved in
developing citation and curation
guidelines/policies.
Finally,
the costs involved in doing all this
will be examined,
including whether these costs should be
part of the Indirect Cost rates,
and how that might be achieved.
The goal is
for the session to provide attendees
a richer understanding of
the full range of issues involved in
maximizing their organizations' research data
and to examine ideas on how to proceed.
Biography
Jennifer Fitzgerald
is the data curator at the
Samuel
Roberts Noble Foundation
Library.
Her involvement includes
new employee orientations,
oversight of the
electronic laboratory notebook (ELN)
for researchers,
recommendations and training for
the Foundation's upcoming
enterprise content management system,
and serving on the
Data Management Committee.
She received a master's degree from
Southeastern
Oklahoma State University
in 2009.
Research Data Specialist
University
of Oklahoma Libraries
University
of Oklahoma
Topic:
"Panel:
Are We Wrangling, Managing or Maximizing
Our Organizations' Research Data?"
Abstract
Organizations are facing
a wide range of issues and challenges
in dealing with the complexity of
meeting government mandates for
open access to research data.
These considerations include
knowing how to inventory the datasets,
describe them with relevant metadata,
enable and promote their access
so researchers can find them,
and enable dataset reusability.
The panel will also explore
the issues involved in
developing citation and curation
guidelines/policies.
Finally,
the costs involved in doing all this
will be examined,
including whether these costs should be
part of the Indirect Cost rates,
and how that might be achieved.
The goal is
for the session to provide attendees
a richer understanding of
the full range of issues involved in
maximizing their organizations' research data
and to examine ideas on how to proceed.
Biography
Dr. Mark Laufersweiler
has always had a strong interest in
computers,
computing,
data
and
data visualization.
Upon completing his post-doc
work for
the
Atmospheric
Radiation Measurement
(ARM)
program,
he was the lead computer systems administrator
for 3.5 years
serving the
Florida
State University
Department
of Earth, Ocean and Atmospheric Science.
He was then
the Computer Systems Coordinator for
the
University
of Oklahoma
School
of Meteorology
from 1999-2013.
Part of his duties included
managing the real time data feed
and maintaining the departmental data archive.
He assisted with faculty
in their courses
to help foster computing skills
needed for the classroom
and instruction based on
current best practices regarding
research data and code development.
Since the Fall of 2013,
he has served as the
Research Data Specialist
for the
University
of Oklahoma Libraries.
He is currently assisting
the educational mission of the Libraries
by developing and offering
workshops,
seminars
and
short
courses,
helping to inform
the university community
on best practices
for
data management and data management planning.
He is also working on
the formation of a data repository
to host research data generated by
the university community.
He is a strong advocate of
open source software
and
open access to data.
In 2008,
Dr. Laufersweiler was
awarded
the
Russell
L. DeSouza Award.
This award,
sponsored by
Unidata,
is for individuals whose
energy,
expertise,
and active involvement
enable the Unidata program
to better serve geoscience.
Honorees personify Unidata's ideal of
a community that shares
data,
software,
and
ideas
through computing and networking technologies.
Associate Dean for Library Operations
Oklahoma
State University
Topic:
"Panel:
Are We Wrangling, Managing or Maximizing
Our Organizations' Research Data?"
Panel Abstract
Organizations are facing
a wide range of issues and challenges
in dealing with the complexity of
meeting government mandates for
open access to research data.
These considerations include
knowing how to inventory the datasets,
describe them with relevant metadata,
enable and promote their access
so researchers can find them,
and enable dataset reusability.
The panel will also explore
the issues involved in
developing citation and curation
guidelines/policies.
Finally,
the costs involved in doing all this
will be examined,
including whether these costs should be
part of the Indirect Cost rates,
and how that might be achieved.
The goal is
for the session to provide attendees
a richer understanding of
the full range of issues involved in
maximizing their organizations' research data
and to examine ideas on how to proceed.
Biography
Robin Leech
is Associate Dean for Library Operations at
Oklahoma
State University,
supervising Technical Services, Systems,
Digital Initiatives and Access Services.
She led the OSU institutional repository team
in the development of
SHAREOK.org,
a joint repository with the
University
of Oklahoma Libraries.
After completion of an MLS from the
University
of Oklahoma,
she worked in a wide variety of libraries:
public, school and academic.
Since 1990,
Robin has concentrated in
academic library automation/technical services,
first at the
OSU-Tulsa
campus
Library,
and since 2006,
at the main OSU Campus in Stillwater, Oklahoma.
Robin is a member of the
American
Library Association
(ALA),
the
Association
of College & Research Libraries
(ACRL),
the
Library
Information Technology Association
(LITA),
the Oklahoma Chapter of ACRL,
the
Oklahoma
Library Association,
and
the
Society
of Southwest Archivists.
Executive Director
Chambers
Library
University
of Central Oklahoma
Topic:
"Panel:
Are We Wrangling, Managing or Maximizing
Our Organizations' Research Data?"
Panel Abstract
Organizations are facing
a wide range of issues and challenges
in dealing with the complexity of
meeting government mandates for
open access to research data.
These considerations include
knowing how to inventory the datasets,
describe them with relevant metadata,
enable and promote their access
so researchers can find them,
and enable dataset reusability.
The panel will also explore
the issues involved in
developing citation and curation
guidelines/policies.
Finally,
the costs involved in doing all this
will be examined,
including whether these costs should be
part of the Indirect Cost rates,
and how that might be achieved.
The goal is
for the session to provide attendees
a richer understanding of
the full range of issues involved in
maximizing their organizations' research data
and to examine ideas on how to proceed.
Biography
Habib Tabatabai
currently serves as the Executive Director of
Chambers
Library
at the
University
of Central Oklahoma
(UCO).
He has more than 25 years of experience
working and leading in the library profession,
implementing and using technology
to facilitate
research, discovery, and preservation
for UCO users.
He is the current Chair of
Ex
Libris Users of North America
(ELUNA),
which advocates on behalf of
close to 3500 libraries worldwide
to improve
user experience and student learning.
Principal Engineer and
Director of Big Data
for Science and Technology
Big Data Pathfinding Group
Intel Corp
Topic:
"How HPC is Central to Bringing
Next Generation Sequencing
from the Lab to the Patient"
(with Stephen Wheat)
Slides:
PDF
Talk Abstract
Next Generation Sequencing (NGS),
the basis for volume sequencing enablement,
has been around for several years.
The volume of sequencers deployed per year
remains on an exponential growth path.
Nevertheless,
the vision of
sequencing-enabled personalized medicine
has come to fruition for
relatively few people.
The community consensus is that
bringing this to large populations
remains 5-7 years out.
Nevertheless,
some projects are underway
to path-find means to accelerate this.
In this talk,
we will review
the solution architecture that will enable this
from a technology perspective.
Furthermore,
we will review the efforts of
the Intel/HP HPC Alliance
with respect to driving these solutions
into actual implementation.
While the solutions architecture
will be focused on the NGS work flow,
the elements of the architecture
are pertinent to other HPC work flows.
Biography
Monica Martinez-Canales
is Principal Engineer and
Director of
Big Data for Science and Technology
in the
Big Data Pathfinding Group
at
Intel
Corporation.
The Pathfinding team is focused on
end-to-end research and development
to accelerate
scientific and technological big data,
predictive analytics,
and high performance computing efforts.
Monica joined Intel in 2008
leading Strategic Initiatives in
Validation Business Intelligence and
Analytics programs
within the
Platform Validation Engineering Group.
Monica's work on
dynamic risk-based
resource allocation strategies,
under schedule pressure and
resource constraints,
enabled the on-time completion of
post-silicon validation of the
4th generation Intel CPU family of products,
including a market-responsive
ultra-low power derivative.
Prior to joining Intel,
Monica had been a
Principal Member of the Technical Staff at
Sandia
National Laboratories,
leading award-winning research in
verification,
validation,
and quantifications of margins
under uncertainty in
complex systems within
defense and energy programs.
Monica completed a
National
Science Foundation
Post-Doctoral Fellowship at
Stanford
University.
Monica earned a Ph.D. in
Computational
and Applied Mathematics
from
Rice
University
and received a B.S. in
Mathematics
from
Stanford
University.
Monica is author of
multiple peer-reviewed journal articles.
Director, HPC Pursuits
Hewlett-Packard
Topic:
"How HPC is Central to Bringing
Next Generation Sequencing
from the Lab to the Patient"
(with Monica
Martinez-Canales)
Slides:
PDF
Talk Abstract
Next Generation Sequencing (NGS),
the basis for volume sequencing enablement,
has been around for several years.
The volume of sequencers deployed per year
remains on an exponential growth path.
Nevertheless,
the vision of
sequencing-enabled personalized medicine
has come to fruition for
relatively few people.
The community consensus is that
bringing this to large populations
remains 5-7 years out.
Nevertheless,
some projects are underway
to path-find means to accelerate this.
In this talk,
we will review
the solution architecture that will enable this
from a technology perspective.
Furthermore,
we will review the efforts of
the Intel/HP HPC Alliance
with respect to driving these solutions
into actual implementation.
While the solutions architecture
will be focused on the NGS work flow,
the elements of the architecture
are pertinent to other HPC work flows.
Biography
Dr. Stephen Wheat is the Director of the
HPC Pursuits team
within
Hewlett-Packard's
HPC business unit.
In this role,
he is responsible for driving
higher-end HPC world-wide business strategies
to meet the challenges
of leadership-class institutions.
Having recently joined HP's HPC business unit,
Dr. Wheat brings his 35-year HPC career
to bear on his new role.
He started in
the Oil and Gas applications domain in Houston,
then moved to
AT&T
Bell Labs,
where the majority of his tenure was on
parallel HPC systems software
for sonar processing,
and then to
Sandia
National Labs,
where his
research was in
massively parallel systems software.
It was during his
tenure at Sandia that
he won the 1994
Gordon
Bell Prize
for performance.
Subsequently,
he spent 20 years at
Intel,
where he served in many leadership
HPC roles,
including serving as Worldwide General Manager (WW GM) of HPC.
Dr. Wheat's Ph.D. is in Computer Science,
with a focus on
massively parallel
systems software.
His M.S. and B.S.
were also in Computer Science.
Dr. Wheat's extracurricular activities include
photography,
recreational
bicycling,
and flying,
where he is
a commercial multi-engine pilot
and
certified flight instructor
for instrument/multi-engine aircraft.
He is the father of four and
grandfather of nine.
He and his wife of 35 years,
Charlene,
live in Houston, Texas.
BREAKOUT SPEAKERS
Research Assistant
Great
Plains Network
Topic:
"All About ENCITE Metrics"
Slides:
PowerPoint
PDF
Talk Abstract
ENCITE
is the
Great
Plains Network's
National
Science Foundation
Campus
Cyberinfrastructure –
Infrastructure, Innovation and
Engineering
(CC*IIE)
grant,
a two-year award
that started in August 2014.
ENCITE provides training on networking topics.
Network engineers use this information
to help researchers get their research done.
This talk will discuss metrics of success
of the project so far.
Biography
Kate Adams
has been with the
Great
Plains Network
since November of 2009.
She is the project coordinator for ENCITE,
helps facilitate various working groups,
keeps the website up to date,
serves as the system administrator
and layout artist,
and was also GPN's first
regional
XSEDE
Champion.
She enjoys sewing, writing, gardening, and
martial arts in her free time.
Associate Professor
Department of
Computing & Information Sciences
Kansas State
University
Director
Institute for Computational Research
Topic:
"Big Storage, Little Budget"
(with
Kyle
Hutson
and
Adam
Tygart)
Slides:
available after the Symposium
Abstract
Kansas State
University's
HPC
cluster
was running out of storage space last year.
Vendors of traditional HPC storage solutions
were either too expensive to be feasible
or offered too little capacity to be of long-term use.
 The system that ended up providing
the best storage capacity
for the available budget was
Ceph,
an open-source project
that provides storage striped across
many commodity servers.
This session is a case study of
the pros and cons of
our implementation of
a 1.5 PB Ceph-based storage cluster,
discussing the history of
network-based filesystems,
including why our previous
Gluster-based filesystem
was no longer suitable.
 Questions and discussion are encouraged.
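For readers who want to experiment before the session, the sketch below shows, purely as an illustration (not material from the talk), how a Ceph cluster can be queried and written to from Python using the python-rados bindings that ship with Ceph; the ceph.conf path and the pool name "hpc-scratch" are assumptions.

    import rados  # python-rados, distributed with Ceph

    # Connect using an existing client configuration/keyring (assumed path).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Cluster-wide usage statistics, reported in kilobytes.
    stats = cluster.get_cluster_stats()
    print("used %.1f TB of %.1f TB" % (stats["kb_used"] / 1e9, stats["kb"] / 1e9))

    # Write and read back one object in a pre-existing pool (name is hypothetical).
    ioctx = cluster.open_ioctx("hpc-scratch")
    ioctx.write_full("hello-object", b"striped across commodity servers")
    print(ioctx.read("hello-object"))
    ioctx.close()
    cluster.shutdown()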
Biography
Daniel
Andresen, Ph.D.
is an associate professor of
Computing
& Information Sciences
at
Kansas
State University
and Director of the
Institute for Computational Research.
His research includes
embedded and distributed computing,
biomedical systems,
and high performance scientific computing.
Dr. Andresen coordinates the activities of
the K-State research computing cluster,
Beocat,
and advises the
local
chapter
of the
Association
for Computing Machinery
(ACM).
He is a
National
Science Foundation
CAREER
award winner,
and has been granted research funding from
the NSF,
the
Defense
Advanced Research Projects Agency
(DARPA),
and industry.
He is a member of
the
Association
for Computing Machinery,
the
IEEE
Computer Society,
the
Electronic
Frontier Foundation,
and
the
American
Society for Engineering Education.
Electronics Engineer
Innovation and
High Performance Computing Center
Tinker
Air Force Base
US
Air Force
Topic:
"Parallel Techniques for
Physics-Based Storm
Simulation and Rendering
in Real-Time Applications"
Slides:
available after the Symposium
Talk Abstract
Commercial flight simulators
are used by
the military and commercial airlines
in order to provide pilots with
regular training and evaluation.
Unfortunately,
these flight simulators
have many shortcomings
when compared to reality,
including a notable lack of
adequate weather simulation.
Every pilot has to deal
with bad weather such as
wind shears,
turbulence,
limited visibility,
and
precipitation.
Despite this,
modern commercial flight simulators
are incapable of
simulating
realistic,
physics-based weather,
and instead either rely on
artistically crafted weather
or have no weather at all.
In order to close this gap,
we utilize
modern high-performance computing
hardware and software
to enhance
a flight simulator with
physics-based weather,
allowing for
improved pilot training and evaluation.
The technologies utilized include
the
TARDIS
supercomputer
at
Tinker
Air Force Base,
OU's
Advanced
Regional Prediction System
weather model,
NVIDIA
CUDA,
OpenMP,
and the
OpenGL
Shading Language (GLSL).
Biography
Joseph Babb
is a Software Engineer at
Tinker
Air Force Base's
Innovation and
High Performance Computing Center.
He is currently working as
the Lead Developer on
their Flight Simulation Enhancement initiative.
Joseph graduated with his MS in
Computer Science
from
Arizona
State University
in 2013.
His research focused on
Artificial Intelligence
and
Knowledge
Representation
and resulted in his thesis entitled
"Towards Efficient Online Reasoning about
Actions"
and a number of
conference and journal publications.
While attending ASU,
he was awarded
a number of honors,
including selection for the national
Science,
Mathematics & Research for
Transformation
(SMART)
fellowship program.
Director
High
Performance Computing Center
Adjunct Associate Professor
Department
of Computer Science
Oklahoma
State University
Topic:
"What's New at OSU!"
Slides:
available after the Symposium
Abstract
Significant growth in
computational and data-intensive research
has driven investment in OSU's HPC Center.
Highlights include
two new full time staff,
a new research cloud and
a $950K+ NSF award.
Biography
Dana Brunson is Director of the
Oklahoma
State University
High
Performance Computing Center
(OSUHPCC),
Adjunct Associate Professor in the
Department
of Mathematics
and in the
Department
of Computer Science,
and co-leads the
OneOklahoma
Cyberinfrastructure Initiative
(OneOCII).
She earned her Ph.D. in
Mathematics
at the
University
of Texas at Austin
in 2005 and her M.S. and
B.S. in Mathematics from
OSU.
She is PI on OSU's 2011 and new 2015
National
Science Foundation
(NSF)
Major
Research Instrumentation
(MRI)
grants for High Performance Compute clusters
for multidisciplinary
computational and data-intensive research.
She is also co-PI on Oklahoma's
NSF
Campus
Cyberinfrastructure -
Network Infrastructure and Engineering
(CC-NIE)
grant,
"OneOklahoma
Friction Free Network"
(OFFN),
a collaboration among OSU,
OU,
Langston
University
and the
Tandy
Supercomputing Center
of the
Oklahoma
Innovation Institute.
Brunson became an
XSEDE
(initially
TeraGrid)
Campus
Champion
in 2009.
She joined the CC leadership team in 2012.
OSUHPCC joined the
XSEDE
Federation
as a Level 3 Service Provider in 2014
and Brunson was
elected chair of the
XSEDE Level 3 Service Providers
in January 2015.
Regional Account Manager
Qumulo
Topic:
"Using Real-Time Analytics
to Better Manage Your Data
with Qumulo Core Software"
Slides:
available after the Symposium
Abstract
Join us for a 30-minute seminar with
Bob Collins
(Regional Account Manager at Qumulo)
to learn how
Qumulo's
next generation data-aware scale-out NAS
leverages its real-time analytics
to help you better manage your data.
In this seminar you will learn how to:
- Understand your data repository at the file level using Qumulo Core's real-time file system analytics.
- Eliminate silos of storage using a single storage namespace for all data.
- Achieve transparent capacity and IO expansion with a linear scale-out storage architecture.
- Customize your environment via a programmable REST API.
- Optimize your storage infrastructure for both sequential write and random search workloads, as well as hot, warm and cold data.
Biography
A seasoned IT veteran with 19+ years of
highly successful
sales and systems engineering leadership in
the storage industry,
Bob built his credentials around
designing advanced and complex
IT datacenter architectures,
spanning a
wide set of
software and hardware technologies
from storage and networking
companies such as
EMC,
NetApp,
Brocade,
Cisco
and
Isilon,
among many others.
As the Regional Account Manager for
the Texas-Oklahoma-Louisiana-Arkansas area,
he is evangelizing
the second generation of
scale-out high performance NAS
that leverages data awareness capability
for his company,
Qumulo.
Assistant Professor
Computer
Science Department
Midwestern
State University
Topic:
"A Data Communication
Reliability and Trustability Study for
Cluster Computing"
Slides:
PowerPoint
PDF
Abstract
In High Performance Computing (HPC),
most of the problems under study will be
either embarrassingly parallel
or data dependent.
Beyond the nature of the problem,
scientists will be interested in
either one or two additional characteristics.
The first,
performance,
focuses on achieving
an accurate solution in
a fraction of the time of
a sequential approach.
The second is obtaining
consecutive, accurate and steady time readings.
In their quest for performance,
some scientists forget
not only that the chosen tool,
in many cases a distributed-memory system,
is a multi-user system,
but also that
its components are interconnected through
a high-speed communications network
to facilitate the interaction among processors.
In this talk,
we show why
a cluster characterization is relevant,
particularly for scientific kernels
where multiple
accurate and consecutive time readings
are necessary
to statistically validate a behavior.
We present the characterization of
two clusters
by using two variants of the ping pong test.
One of the clusters is
a multi-user research oriented cluster,
while the second is
a one-user cluster with older technology.
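For context, a ping pong test bounces a fixed-size message between two MPI ranks and times the round trips; repeating it yields the consecutive time readings the abstract refers to. The sketch below is a minimal illustration in Python with mpi4py (not the presenters' code); the message size and repetition count are arbitrary choices. Run it with two processes, e.g. mpiexec -n 2 python pingpong.py.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    nbytes = 1 << 20                      # 1 MiB message (arbitrary)
    reps = 100                            # repeated to get steady readings
    buf = np.zeros(nbytes, dtype='b')

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        rtt = elapsed / reps
        print("average round trip: %.1f us" % (rtt * 1e6))
        print("effective bandwidth: %.1f MB/s" % (2 * nbytes / rtt / 1e6))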
Biography
Dr. Eduardo Colmenares is
an Assistant Professor of
Computer
Science
at
Midwestern
State University.
He received his BS in
Electronics Engineering
from the
Industrial
University of Santander,
Colombia,
his Master of Science and PhD in
Computer
Science
from
Texas
Tech University,
both with a focus in
High Performance Computing and
Scientific Computing.
For his doctoral work at Texas Tech University,
Dr. Colmenares studied
a kernel of scientific relevance
in multiple fields of science,
the All-Pairs Shortest Path (APSP) problem.
He developed
an algorithmically restructured solution
for the APSP
that makes use of non-blocking features
supported by
a heterogeneous multi-core architecture,
in order to minimize the effects of
the intense data sharing among processors
and to target better performance
than the traditional and pipelined approaches.
Solutions Architect
Tesla Sales
NVIDIA
Topic:
"NVIDIA Accelerated Computing Frontiers in
HPC, Scientific Computing and Deep Learning"
Slides:
available after the Symposium
Abstract
NVIDIA Graphics Processing Units (GPUs)
are
the world's fastest and most power efficient
accelerators,
delivering world record
scientific application performance.
Learn how recent advances in
NVIDIA Tesla solutions
are enabling software developers and end users
to obtain
maximum performance and power efficiency
for their workloads.
Topics to be covered will include
a brief Tesla
High Performance Computing (HPC)
roadmap,
CUDA
and
OpenACC
updates,
a brief review of GPU enabled applications,
and
an update on
why GPUs are driving innovations in
Deep Learning.
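As a small taste of GPU acceleration from a high-level language (an illustration, not material from the talk), the sketch below offloads a vector addition to an NVIDIA GPU with Numba's CUDA support; it assumes an NVIDIA GPU, the CUDA toolkit, and the numba and numpy packages are available.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)                  # global thread index
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    # Numba copies the NumPy arrays to and from the device automatically.
    vector_add[blocks, threads_per_block](a, b, out)
    print(out[:4], a[:4] + b[:4])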
Biography
Bob Crovella leads a technical team at
NVIDIA
that is responsible for
supporting the sales of our
GPU
Computing products
through our
Original Equipment Manufacturer (OEM)
partners and systems.
Bob joined NVIDIA in 1998.
Previous to his current role at NVIDIA,
he led a technical team
that was responsible for
the design-in support of our GPU products
into OEM systems,
working directly with
the OEM engineering and technical staffs
responsible for their respective products.
Prior to joining NVIDIA,
Bob held various engineering positions at
Chromatic Research,
Honeywell,
Cincinnati
Milacron,
and
Eastman
Kodak.
Bob holds degrees from
Rensselaer
Polytechnic Institute
(M. Eng.,
Communications and Signal Processing)
and
The
State University of NY at Buffalo
(BSEE).
He resides with his family
in the Dallas, Texas area.
Assistant Professor of Research
Department of Medical Informatics
School
of Community Medicine
University
of Oklahoma - Tulsa
Topic:
"Exploring Adverse Drug Effect Data
Using Apache Spark, Hadoop, and Docker"
Slides:
PDF
Abstract
Adverse drug reactions (ADRs),
a subset of the broader adverse events (AEs),
have been shown in several studies
to have a considerable burden on
healthcare costs and patient outcomes.
ADRs account for
a significant increase in patient
morbidity,
mortality,
and additional healthcare costs.
In this presentation,
we explore ADRs and AEs from
the U.S. Food and Drug Administration's
Adverse Event Reporting System
(FAERS) data set.
Using big data analysis tools from
the Hadoop ecosystem,
including
Apache Spark,
we analyze the FAERS data
and
discuss interesting trends and observations
in the 10+ year historical data set.
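As an illustration of this kind of exploration (not the presenter's code), the sketch below uses PySpark to rank drug/reaction pairs by report count. The HDFS paths are placeholders; the '$' delimiter and the column names primaryid, drugname and pt reflect the public FAERS ASCII extracts but should be treated as assumptions here.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("faers-exploration").getOrCreate()

    # FAERS quarterly extracts: drug records and reported reactions.
    drugs = spark.read.csv("hdfs:///faers/DRUG*.txt", sep="$", header=True)
    reactions = spark.read.csv("hdfs:///faers/REAC*.txt", sep="$", header=True)

    # Join on the case identifier and rank the most frequently reported
    # drug/reaction pairs across the data set.
    top_pairs = (drugs.join(reactions, on="primaryid")
                      .groupBy("drugname", "pt")
                      .count()
                      .orderBy(F.desc("count")))
    top_pairs.show(20, truncate=False)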
Biography
Dr. Nicholas Davis
is Assistant Professor of Research in
Medical Informatics at the
University of Oklahoma-Tulsa
School of Community Medicine.
He received his BS in
Computer
Science
with a minor in
Mathematics,
his MS in
Computer Science
with a focus in
Information Security,
and his PhD in
Computer Science,
all from the
University
of Tulsa
(TU).
For his doctoral work at TU,
Dr. Davis performed research in bioinformatics,
focusing on
genomic analysis of
immune response data sets
and
analysis of
fMRI
brain imaging data
to identify regions of interest.
In addition to his academic experience,
Dr. Davis has accumulated
over a decade of industry experience in
a variety of technology roles,
such as software development and architecture,
network and system administration,
and information security,
including being a
Certified
Information Systems Security Professional
(CISSP).
He is inventor on a patent for
Methods
and Systems for
Graphical Image Authentication.
His current projects include
analysis of type 1 diabetes mellitus data
to determine insulin pump settings
correlated to improved glycemic outcomes,
as well as
data mining of
clinical and claims data sets
to understand and create
predictive models of
medication adherence
across multiple dimensions.
Dr. Davis's research interests include
analysis of
electronic health record
and claims data,
data science algorithms and tools,
machine learning/statistical inference,
diabetes,
medication adherence,
integrative analysis of
heterogeneous biological data sets,
and
high performance computing.
Principal Systems Engineer
Americas Sales
Brocade
Communications Systems, Inc.
Topic:
"Software Defined Networking –
That's the answer, What's the question?"
Slides:
PowerPoint
PDF
Abstract
Oh no,
not another SDN presentation talking about
a bunch of new techie acronyms
that mean nothing to me.
Well,
there will be some of that here,
however in this presentation you'll also get
a perspective on SDN
in regard to the reality of its use.
There is no doubt that SDN
will touch every network
in some shape, form or fashion in the future.
How and to what extent will vary greatly.
This presentation will focus on
the technologies of SDN
and real life use case implementations
to solve real life issues.
Biography
Dan DeBacker,
Principal Systems Engineer,
Americas,
provides subject matter expertise in
all aspects of
Brocade's
Ethernet and Software Defined Networking
solutions.
He is engaged in
large, strategic account opportunities
offering insight to address
customer business requirements
and providing Brocade's
long term vision for data networking.
A tech veteran for more than 25 years,
Dan is valued for his communication skills,
customer-first mentality and transparency.
His vast industry experience
in dealing with large customers worldwide
enables him to help solve
complex customer needs,
create new business opportunities
and utilize skills in strategic planning,
team building and business development.
Prior to Brocade,
Dan held positions within systems engineering,
office of the Chief Technology Officer and
product/solution management at
Bay Networks / Nortel / Avaya.
Dan also held various positions
within the IT organizations of
Ford Motor Company.
Dan holds a Bachelor of Science degree in
Computer and Information Systems
as well as an
MBA
from the
University
of Michigan.
Research Assistant Professor
School
of Civil Engineering &
Environmental Science
University
of Oklahoma
Topic:
"Initial Steps to Optimizing
a Shallow-Water Model, ADCIRC,
for the Intel(R) Xeon Phi Co-processors"
Slides:
available after the Symposium
Abstract
Coming soon
Biography
Dr. Kendra M. Dresback
is a Research Assistant Professor in the
School
of Civil Engineering &
Environmental Science
at the
University
of Oklahoma.
She received her PhD in Civil Engineering at
the University of Oklahoma.
Her MS thesis investigated
a predictor-corrector time-marching algorithm
to achieve accurate results
in less time
using
a finite element-based shallow water model;
her dissertation focused on
several algorithmic improvements to
the same
finite element-based shallow water model,
ADCIRC.
She has published papers in the area of
computational fluid dynamics.
Dr. Dresback's research includes
the use of computational models
to help in the prediction of
hurricane storm surge and flooding
in coastal areas
and
the incorporation of transport effects in
coastal seas and oceans in ADCIRC.
Her research has been supported with
funding from
the
National
Science Foundation,
the
US
Department of Education,
the
Office
of Naval Research,
the
US
Department of Defense EPSCoR,
the
US
Department of Homeland Security,
NOAA
and the
US
Army Corps of Engineers.
Education, Outreach, and Training Director
National
Institute for Computational Sciences
University
of Tennessee Knoxville
Topic:
"XSEDE and its Campus Bridging Project"
Slides:
PowerPoint
PDF
Talk Abstract
We will give a brief overview of
the
NSF-funded
Extreme
Science and Engineering Discovery
Environment
(XSEDE)
project
and then detail
its
Campus
Bridging
effort.
Within XSEDE,
Campus Bridging
is a combination of
tools,
people,
and
technical expertise,
striving to bring resources in
data,
storage,
and
compute power
close enough to the user
so as to appear to be
peripheral devices on
their own desktop machine.
The tools and other features of this effort
do not require a connection to XSEDE;
they can be used
to increase productivity independently.
Biography
Jim Ferguson
is the Director of
Education, Outreach & Training
for the
National
Institute for Computational Sciences
(NICS)
at the
University
of Tennessee Knoxville.
His responsibilities include
coordinating a wide range of
outreach and education related activities
associated with NICS,
as well as
varied responsibilities in the
XSEDE
project
in the
Training
and
Campus
Bridging
efforts.
Jim has served on
many workshop and conference
organizing committees,
with current efforts including
the upcoming
SCxy
Conferences
and the
International
HPC Summer School
series.
Before joining NICS,
Jim focused on programming for,
training,
and educating
users of
high performance computers and networks.
Jim's previous experience includes
positions at
Pratt
& Whitney Aircraft
and the
National
Center for Supercomputing Applications,
including significant roles in
NSF-funded projects
like the
National
Laboratory for Applied Network Research
and
Web100.
Jim is an alumnus of
Rose-Hulman
Institute of Technology.
Professor
Department
of Mathematics
Southeastern
Oklahoma State University
Topic:
"Parallel Programming in the Classroom -
Analysis of Genome Data"
(with Mike Morris)
Slides:
available after the Symposium
Abstract
Over the course of a semester,
students enrolled in
an HPC seminar class
created a suite of
human genome analysis tools
on the Beowulf clusters
that they and other students built.
The analysis tools were written with C and MPI
and subsequently interfaced with a LAMP
(Linux, Apache,
MySQL, and PHP)
website
through the use of scripts.
The output was visualized
with the help of
Google
Charts.
We will discuss
the technical details of this project
and demonstrate how these tools
can be used to analyze
multiple human genomes simultaneously.
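The course tools themselves were written in C with MPI; purely as an illustration of the same decomposition pattern, the sketch below uses Python with mpi4py to split a (hypothetical) single-chromosome text file across ranks and compute its overall GC content. Run it with, e.g., mpiexec -n 4 python gc_content.py.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        with open("chr21.txt") as f:      # hypothetical sequence file
            seq = f.read().upper()
        chunk = len(seq) // size
        pieces = [seq[i * chunk:(i + 1) * chunk] for i in range(size - 1)]
        pieces.append(seq[(size - 1) * chunk:])   # last rank takes the remainder
    else:
        pieces = None

    piece = comm.scatter(pieces, root=0)          # one chunk per rank
    local_gc = piece.count("G") + piece.count("C")
    local_len = len(piece)

    total_gc = comm.reduce(local_gc, op=MPI.SUM, root=0)
    total_len = comm.reduce(local_len, op=MPI.SUM, root=0)
    if rank == 0:
        print("GC content: %.2f%%" % (100.0 * total_gc / total_len))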
Biography
Karl Frinkle
is an applied mathematician
who earned his PhD from the
University
of New Mexico.
He is deeply interested in
numerical simulations,
and most recently in parallel programming.
Karl joined
the SE Mathematics department in 2005,
and thoroughly enjoys teaching
parallel programming
courses
with
Mike Morris
through the CS department.
Professor of Computer Science
Tandy Professor of
Bioinformatics and Computational Biology
Tandy
School of Computer Science
University of
Tulsa
Topic:
"Building an Exotic HPC Ecosystem at
The University of Tulsa"
(with Andrew Kongs
and Peter J. Hawrylak)
Slides:
available after the Symposium
Talk Abstract
This talk covers the in-progress journey of
the
Tandy
School of Computer Science
at
The
University of Tulsa
to build
a unique
high performance computing (HPC) ecosystem
for researchers and students.
The presenters motivate and describe
the launch of
TU's initial HPC point of presence
— a traditional CPU cluster —
along with lessons learned from that process.
They also discuss ongoing work
to stand up
two distinct
heterogeneous compute node clusters
and the challenging research problems
they will be used to address.
Objectives and developments
in leveraging these HPC resources
in the classroom
will be presented.
In addition to passing along
some wisdom picked up along the way,
the presenters will reveal their plans for
the future of TU's evolving HPC ecosystem.
Biography
Dr. John Hale is a Professor of
Computer
Science
and holds the Tandy Endowed Chair in
Bioinformatics and Computational Biology
at the
University of
Tulsa.
He is a founding member
of the
TU
Institute
of Bioinformatics and
Computational Biology
(IBCB),
and a faculty research scholar in the
Institute
for Information Security
(iSec).
His research has been funded by the
US
Air Force,
the
National
Science Foundation
(NSF),
the
Defense
Advanced Research Projects Agency
(DARPA),
the
National
Security Agency
(NSA),
and the
National
Institute of Justice
(NIJ).
These projects include research on
neuroinformatics,
cyber trust,
information privacy,
attack modeling,
secure software development,
and
cyber-physical system security.
He has testified before Congress
on three separate occasions
as an information security expert,
and in 2004 he was awarded a patent on
technology he co-developed to thwart
digital piracy on file sharing networks.
In 2000,
Professor Hale earned a prestigious
NSF
CAREER
award for
his educational and research contributions to
the field of information assurance.
Assistant Professor
Tandy
School of Computer Science
Assistant Professor
Department
of Electrical and Computer Engineering
The University of
Tulsa
Topic:
"Building an Exotic HPC Ecosystem at
The University of Tulsa"
(with Andrew Kongs
and John Hale)
Slides:
available after the Symposium
Talk Abstract
This talk covers the in-progress journey of
the
Tandy
School of Computer Science
at
The
University of Tulsa
to build
a unique
high performance computing (HPC) ecosystem
for researchers and students.
The presenters motivate and describe
the launch of
TU's initial HPC point of presence
— a traditional CPU cluster —
along with lessons learned from that process.
They also discuss ongoing work
to stand up
two distinct
heterogeneous compute node clusters
and the challenging research problems
they will be used to address.
Objectives and developments
in leveraging these HPC resources
in the classroom
will be presented.
In addition to passing along
some wisdom picked up along the way,
the presenters will reveal their plans for
the future of TU's evolving HPC ecosystem.
Biography
Peter J. Hawrylak, Ph.D. (M'05)
received the B.S. degree in
Computer Engineering,
the M.S. degree in
Electrical Engineering,
and the Ph.D. in
Electrical Engineering
from the
University
of Pittsburgh,
in 2002, 2004, and 2006, respectively.
He is an Assistant Professor in the
Department
of Electrical and Computer Engineering,
with a joint appointment in the
Tandy
School of Computer Science,
at
The University of
Tulsa.
He has authored more than 40 publications
and holds 12 patents
in the radio frequency identification
(RFID)
and
energy harvesting areas.
His research interests include
RFID,
security for low-power wireless devices,
Internet of Things applications,
and
digital
design.
Dr. Hawrylak is a member of the
IEEE
and
IEEE
Computer Society,
and is currently the Secretary of
the Tulsa Section of the IEEE.
He served as chair of the
RFID Experts Group
(REG)
of the
Association
for Automatic Identification and Mobility
(AIM)
in 2012-2013.
Peter received AIM Inc.'s
Ted Williams Award
in 2015 for his contributions to
the RFID industry.
Dr. Hawrylak is the Publication Co-Chair of
the
International
IEEE RFID Conference,
and is the Editor-in-Chief of the
International
Journal of
Radio Frequency Identification
Technology and Applications
(IJRFITA)
journal published by
InderScience
Publishers,
which focuses on
the application and development of
RFID technology.
System Administrator
Department of
Computing & Information Sciences
Kansas State
University
Topic:
"Big Storage, Little Budget"
(with
Dan
Andresen
and
Adam
Tygart)
Slides:
available after the Symposium
Abstract
Kansas State
University's
HPC
cluster
was running out of storage space last year.
Vendors of traditional HPC storage solutions
were either too expensive to be feasible
or offered too little capacity to be of long-term use.
 The system that ended up providing
the best storage capacity
for the available budget was
Ceph,
an open-source project
that provides storage striped across
many commodity servers.
This session is a case study of
the pros and cons of
our implementation of
a 1.5 PB Ceph-based storage cluster,
discussing the history of
network-based filesystems,
including why our previous
Gluster-based filesystem
was no longer suitable.
 Questions and discussion are encouraged.
Biography
Kyle Hutson has been involved with
Linux system administration since 1994.
He received his bachelor's degree from
Kansas State
University
in
computer
engineering
in 1995.
He has worked in
non-profit,
public sector,
and
private sector IT services,
including several years as
a small business IT consultant.
Kyle joined
Kansas State University's
HPC team in 2012.
PhD Graduate Student
School
of Chemical Engineering
Oklahoma
State University
Topic:
"Thermo-physical and Structural Properties of
Imidazolium Based Binary Ionic Liquid Mixtures
from Molecular Simulation"
Slides:
available after the Symposium
Talk Abstract
Ionic liquids (ILs)
are novel chemical substances
composed entirely of ions.
Unlike common salts,
ILs can be synthesized
to exist as liquids under ambient conditions.
Many ILs do not evaporate
and hence are dubbed
"environmentally friendly,"
making them attractive candidates for
replacement of volatile organic compounds
used in chemical industry.
ILs are also known as
"designer solvents,"
as their properties
can be fine-tuned by
varying the cations and anions independently.
The number of such possible combinations
can be increased dramatically
by forming mixtures of ILs.
In this presentation,
we report the predictions of
structural and thermo-physical properties,
obtained by
molecular dynamics atomistic simulations
of two binary ILs over a range of temperature.
One of the binary mixtures
contained
the cation 1-n-butyl-3-methylimidazolium
[C4mim]+
while different mole fractions of
chloride [Cl]-
and
methyl sulfate [MeSO4]-
were investigated.
Another binary IL mixture was composed of
[C4mim]+
in combination with
different mole fractions of
[Cl]-
and
bis(trifluoromethanesulfonyl)imide
[NTf2]-
anions.
The mixture behavior was quantified
in terms of thermodynamic properties
such as
excess molar volume
and
excess residual enthalpy.
The observed non-ideal behavior of IL mixtures
will be explained in terms of
three-dimensional probability plots of
anion distributions
around the cation
[C4mim]+
and enhancement of the local mole fraction,
suggesting how the local environment of
the cation and anion changes
with composition.
Also,
transport properties
like
self-diffusion coefficients
and
ionic conductivity
were predicted and reasoned
based on ion pair correlated motion.
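For reference (a standard definition, not a result from the talk), the excess molar volume of a binary mixture is

    V_m^E = V_m(mixture) - (x_1 * V_m,1 + x_2 * V_m,2)

where x_1 and x_2 are the mole fractions and V_m,1 and V_m,2 the molar volumes of the pure ionic liquids; a negative V_m^E indicates tighter packing than in an ideal mixture, and the excess residual enthalpy is defined analogously from the mixture and pure-component residual enthalpies.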
Biography
Utkarsh Kapoor
received his Bachelor's degree in
Chemical
Engineering
from
Birla
Institute of Technology and Science
(BITS) –
Pilani, Rajasthan, India
in 2012.
Thereafter,
he worked as a Process Engineer at
Grasim
Industries Ltd.
(chemical division),
Aditya
Birla Group
(ABG)
for a year and a half
with a focus on manufacturing
caustic soda solution.
He was also part of
the plant commissioning team
when he was initially stationed at
ABG's sulphites division in Thailand.
He has been pursuing a Ph.D. in the
School
of Chemical Engineering
at
Oklahoma
State University
since fall 2014,
with a special focus on
predicting various properties of solvents
such as ionic liquids
using the power of computational simulations.
He is a recipient of
the Halliburton Graduate Fellowship
from
OSU's
College
of Engineering,
Architecture and Technology
(CEAT),
and is working as
Creativity, Innovation and
Entrepreneurship Scholar,
having received a scholarship from OSU's
Spears
School of Business
for academic year 2015-16.
He also received the
Graduate College
top-tier fellowship
for academic year 2014-15.
He is also involved as
Vice President
of
OSU
Automation Society
(OSUAS)
and
General Secretary
of OSU's
Chemical
Engineering
Graduate Student Association
(ChEGSA),
where he helps the team in
planning and organizing
various technical and social events.
Research Staff
Tandy
School of Computer Science
University
of Tulsa
Topic:
"Building an Exotic HPC Ecosystem at
The University of Tulsa"
(with John Hale
and Peter J. Hawrylak)
Slides:
available after the Symposium
Talk Abstract
This talk covers the in-progress journey of
the
Tandy
School of Computer Science
at
The
University of Tulsa
to build
a unique
high performance computing (HPC) ecosystem
for researchers and students.
The presenters motivate and describe
the launch of
TU's initial HPC point of presence
— a traditional CPU cluster —
along with lessons learned from that process.
They also discuss ongoing work
to stand up
two distinct
heterogeneous compute node clusters
and the challenging research problems
they will be used to address.
Objectives and developments
in leveraging these HPC resources
in the classroom
will be presented.
In addition to passing along
some wisdom picked up along the way,
the presenters will reveal their plans for
the future of TU's evolving HPC ecosystem.
Biography
Andrew Kongs
is Research Staff at
The
University of Tulsa.
His specialties include
prototyping,
enterprise networking,
embedded systems,
printed circuit board design
and
digital forensics.
He
designed,
built
and
manages
Anvil,
a general purpose cluster at
the University of Tulsa.
He has designed
electronics and instrumentation
for research and teaching purposes.
XSEDE
Director for
Education
and Outreach
Shodor
Education Foundation, Inc.
Blue Waters
Technical Program Manager for
Education
National
Center for Supercomputing Applications
Topic:
"Expanding Campus Engagement with XSEDE"
Slides:
available after the Symposium
Talk Abstract
A key objective of
XSEDE
is to increase research productivity
and
the preparation of the workforce
via access to advanced digital
resources and services.
Campuses are a critical component of
XSEDE's efforts to engage and support
the user community.
Through cooperation and coordination
with campuses,
the resources and services
being offered on campuses
can directly complement
those offered by XSEDE;
from deploying advanced digital resources
to providing support services
such as consulting and training.
The session will begin with
a discussion of the range of
XSEDE's
resources and services.
This will be followed by
an open discussion of
the needs and requirements of campuses,
which XSEDE can help to address.
Biography
Through his position with the
Shodor
Education Foundation, Inc.,
Scott Lathrop
splits his time between being the
XSEDE
Director of
Education
and Outreach,
and being the
Blue Waters
Technical Program Manager for
Education.
Lathrop has been involved in
high performance computing and communications
activities since 1986.
Lathrop is currently coordinating
education and outreach activities among
the
XSEDE
Service
Providers
involved in the NSF-funded XSEDE project.
He coordinates
the community engagement activities
for the
Blue Waters
project.
He helps ensure that
Blue Waters and XSEDE
education and outreach activities
are coordinated and complementary.
Lathrop has been involved in the
SC
Conference series
since 1989,
and served as a member of the
SC
Steering Committee
for six years.
He was the
XSEDE14
Conference
General Chair.
Independent Researcher
Topic:
"Computing Hydrogen Ion Survival Probability:
Academy Student, Graduate Student,
and Faculty Experiences"
Slides:
available after the Symposium
Abstract
This presentation covers the experiences of a Missouri Academy
student, a Graduate Directed Project team, and Computer Science and
Physics Faculty at Northwest Missouri State University in data
management, computational science and physics while simulating firing
a Hydrogen Ion at a metal surface. Faculty involved in the project,
Drs. Chakraborty, Monismith, and Shaw, were awarded XSEDE startup and
XRAC allocations to perform over 20,000 2D simulations of firing a
hydrogen ion at various metallic surfaces at a scale of hundredths of
atomic units. Simulations in this project allowed for variations in
the trajectory model used, distance of closest approach, normal
velocity, parallel velocity, height of the potentials, width of each
potential, and distance between adjacent steps. Academy student
experiences included learning about directive based parallelism and
updating a Fortran IV/77 code to Fortran 90 and adding OpenMP
parallelism. Graduate students involved in a graduate directed
project developed two codes as part of a data management plan for the
project. The first was to upload simulation results from the TACC
Stampede supercomputer to a server at Northwest Missouri State
University to retain results in a MySQL database. The second was to
retrieve data from this MySQL database and present it in a graphical
format using a Java Swing GUI tool that produced graphical reports
using the JasperReports API. Faculty have performed significant
optimizations to the code to allow for single parameter set executions
that make use of all compute resources on a Stampede node -
asynchronous OpenMP/Xeon Phi OpenMP with 16 and 240 cores,
respectively. So far results in this project have been produced for
two metals and Drs. Chakraborty and Shaw have over 30 graphs on which
they are performing analysis. Dr. Monismith is currently performing
optimizations on a 3D version of this code on PSC Greenfield.
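To give a flavor of the directive-based parallelism described above, the sketch below shows how an embarrassingly parallel parameter sweep can be expressed with OpenMP in C. The survival_probability() kernel, the parameter ranges, and the array sizes are illustrative placeholders, not the project's actual code.
/* Minimal sketch of a directive-based parameter sweep, assuming a
 * hypothetical survival_probability() kernel; parameter names and
 * ranges are illustrative only. */
#include <stdio.h>
#include <omp.h>

/* Placeholder for the per-parameter-set 2D simulation. */
static double survival_probability(double closest_approach,
                                   double normal_velocity)
{
    /* ... the real physics kernel would go here ... */
    return closest_approach * normal_velocity;   /* dummy value */
}

int main(void)
{
    const int n_approach = 100, n_velocity = 200;
    static double result[100][200];

    /* Each (approach, velocity) pair is an independent simulation,
     * so the nested loops can be collapsed and spread over threads. */
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int i = 0; i < n_approach; i++) {
        for (int j = 0; j < n_velocity; j++) {
            double d = 0.01 * i;      /* distance of closest approach */
            double v = 0.001 * j;     /* normal velocity              */
            result[i][j] = survival_probability(d, v);
        }
    }

    printf("computed %d parameter sets on up to %d threads\n",
           n_approach * n_velocity, omp_get_max_threads());
    return 0;
}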
Biography
Dr. David Monismith is an independent researcher in the Oklahoma City
Area. He was an Assistant Professor at Northwest Missouri State
University from 2012 to 2015 where he served as XSEDE Campus Champion,
Graduate Directed Projects Coordinator, and PI on two US Army
Subcontracts. He is currently working as a Co-PI with Drs. John Shaw
and Himadri Chakraborty on an XSEDE Allocation entitled "Computational
Simulations of Electronic Motions and Excitations in Nanostructured
Surfaces by Ion-Surface and Adsorbate-Surface Charge-Transfer
Interactions". While working on this project, Dr. Monismith wrote
code that generates scripts to perform parameter sweeps on the Texas
Advanced Computing Center (TACC) Stampede Supercomputer using the TACC
Launcher. He also worked with Yixiao (Icy) Zhang, a Northwest Academy
student, to help her parallelize the code using OpenMP. Dr. Monismith
later updated the code to make use of Xeon Phi Accelerators, and
worked with a graduate student team to develop tools to save results
to a database and generate graphs from those results. Dr. Monismith
is currently performing reviews, parallelization, and optimization on
the code for this project. Additionally, Dr. Monismith is performing
pro bono work on the Scholar-Link project with the Community
Foundation of Northwest Missouri. Scholar-Link has been a graduate
directed project at NWMSU since 2012. It enables students in
Northwest Missouri to easily access and apply for hundreds of
scholarships offered through the Community Foundation using a single
scholarship application.
Assistant Professor
Department
of Chemistry, Computer and Physical
Sciences
Southeastern
Oklahoma State University
Topic:
"Parallel Programming in the Classroom -
Analysis of Genome Data"
(with Karl Frinkle)
Slides:
available after the Symposium
Abstract
Over the course of a semester,
students enrolled in
an HPC seminar class
created a suite of
human genome analysis tools
on the Beowulf clusters
that they and other students built.
The analysis tools were written with C and MPI
and subsequently interfaced with a LAMP
(Linux, Apache,
MySQL, and PHP)
website
through the use of scripts.
The output was visualized
with the help of
Google
Charts.
We will discuss
the technical details of this project
and demonstrate how these tools
can be used to analyze
multiple human genomes simultaneously.
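As a rough illustration of the C and MPI style described above, the sketch below counts occurrences of a short motif across ranks and combines the partial counts with a reduction. The synthetic sequence, the motif, and the chunk size are assumptions for the example; the students' actual analysis tools are more elaborate and read real genome data.
/* Minimal C+MPI sketch: each rank scans its share of a sequence and
 * a reduction combines the partial counts.  The data is synthetic. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Synthetic "genome" chunk owned by this rank. */
    const long chunk_len = 1000000;
    char *chunk = malloc(chunk_len + 1);
    const char bases[] = "ACGT";
    srand(rank + 1);
    for (long i = 0; i < chunk_len; i++)
        chunk[i] = bases[rand() % 4];
    chunk[chunk_len] = '\0';

    /* Count occurrences of a short motif in the local chunk. */
    const char *motif = "GATTACA";
    long local_count = 0;
    for (char *p = chunk; (p = strstr(p, motif)) != NULL; p++)
        local_count++;

    /* Combine the per-rank counts on rank 0. */
    long total_count = 0;
    MPI_Reduce(&local_count, &total_count, 1, MPI_LONG, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("motif %s found %ld times across %d ranks\n",
               motif, total_count, size);

    free(chunk);
    MPI_Finalize();
    return 0;
}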
Biography
Mike Morris' degrees are in math,
but he has always said
he wound up on the business end of a computer.
He taught Computer Science (CS)
in the early 80s
after working as
an Operations Research Analyst for
Conoco
in Ponca City, OK.
Mike left teaching and spent 15 years
doing various things in the CS industry
before returning to
Southeastern Oklahoma State
to once again teach CS,
where he remains today.
Research Assistant
Aerospace
Engineering
Computational Fluid Dynamics Laboratory
Wichita
State University
Topic:
"Peformance Tuning and Optimization of
a Hybrid MPI+OpenMP Higher Order
Computational Fluid Dynamics Solver"
Slides:
available after the Symposium
Talk Abstract
An in-house
Computational Fluid Dynamics
(CFD)
numerical solver with
a higher order
Weighted Essentially Non-Oscillatory
(WENO)
Scheme
for the
incompressible Navier-Stokes equations
is developed
and parallelized with a hybrid implementation
utilizing MPI and OpenMP.
Here,
we focus on approaches for
performance analysis and
enhancement of the above solver,
and evaluate their results.
By carefully offloading OpenMP constructs to
Intel Xeon Phi co-processors,
using non-blocking MPI communication calls
to hide communication overhead,
and constructing derived data types
for non-contiguous data,
we have achieved a strong-scaling
speed-up of 75x on 64 cores.
We also highlight
the other key improvements and optimizations
utilized to achieve these results.
Performance tuning is approached on
four fronts:
MPI routines,
OpenMP offloading,
cache optimizations
and
the Intel Math Kernel Library (MKL).
Though it is tedious to refactor
the tightly coupled algorithms,
these improvements enable us
to execute larger and more accurate
simulations
that take advantage of
the Many Integrated Core (MIC)
architectures of modern HPC systems
as well as
large-scale distributed-memory computing.
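The sketch below, written in C for brevity, illustrates the non-blocking MPI pattern the abstract refers to: halo exchanges are posted with MPI_Isend/MPI_Irecv, interior work proceeds under OpenMP while messages are in flight, and boundary points are updated after MPI_Waitall. The 1D slab, array sizes, and stencil update are illustrative assumptions, not the solver's actual code.
/* Sketch of an overlapped, non-blocking halo exchange: post the
 * ghost-cell messages, update the interior with OpenMP while the
 * messages are in flight, then wait and update the boundaries. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1024                 /* local grid points per rank (1D slab) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    static double u[N + 2], unew[N + 2];   /* +2 ghost cells */
    for (int i = 0; i < N + 2; i++)
        u[i] = rank;                        /* dummy initial data */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Post non-blocking exchange of the ghost cells. */
    MPI_Request reqs[4];
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&u[N],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

    /* Overlap: update interior points that need no halo data. */
    #pragma omp parallel for
    for (int i = 2; i <= N - 1; i++)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

    /* Complete communication, then update the boundary points. */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    unew[1] = 0.5 * (u[0] + u[2]);
    unew[N] = 0.5 * (u[N - 1] + u[N + 1]);

    if (rank == 0)
        printf("one overlapped halo-exchange step done on %d ranks\n", size);

    MPI_Finalize();
    return 0;
}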
Biography
Mukundhan Selvam
holds a M.S. degree in
Aerospace
Engineering
from
Wichita
State University,
and earned his B.E. degree in
Aeronautical Engineering
from
Anna
University,
Chennai, India.
Mr. Selvam's research interests are
computational fluid dynamics,
high performance computing,
distributed computing,
scientific computation,
numerical turbulence modeling
and
performance tuning
for massively parallel scientific software.
Mr. Selvam also holds
a Research position at the
National
Institute for Aviation Research
(NIAR)
in
Wichita, KS,
where he has led teams and projects in
developing automated systems software
in Python,
data analysis,
testing/certification of
advanced aircraft materials, and
adapting finite element numerical analysis
to these simulations.
Recent work has been focused on
performance approaches for
improving speedup and efficiency
of a hybrid CFD solver in HPC clusters.
In 2015 he was recognized for
his excellence and research contributions in
the aerospace field by
Sigma Gamma Tau,
the American honor society in
Aerospace Engineering.
Driven by his passion for
HPC and fluid dynamics,
he has published and
presented technical papers at
conferences such as AIAA.
He is currently seeking to continue
his career and research in
scientific simulation
software development and HPC.
Regional Sales Director
Mellanox
Technologies
Topic:
"New Era of Performance through Co-Design"
(with
second speaker)
Slides:
PDF
Abstract
Mellanox
InfiniBand
technology
is the foundation for
scalable and performance demanding
computing infrastructures.
Delivering more than 100Gb/s throughput,
sub-700ns application-to-application latency
and
message rates of
150 million messages per second
has already placed
ConnectX-4
EDR 100Gb/s
technology in the
Top500
list
of the world's most powerful and efficient
supercomputers.
We will discuss
the latest interconnect advancements
that maximize application performance
and scalability
through the concept of co-design,
an industry driven concept
that accelerates the path to
Exascale.
Biography
Mr. D. Kent Snider
currently holds the position of
Director, Central US Sales
for
Mellanox
Technologies.
His responsibilities include
direction of all
sales,
engineering,
support
and
demand generation activities
for the Central US Region.
Mr. Snider has over 15 years of experience in
the high technology industry in various
sales,
sales management
and
consulting roles.
Mr. Snider has broad experience in
the IT industry including
networking,
HPC,
storage infrastructure,
managed services and
IT contract consulting.
His assignments have covered
many vertical markets
(Oil & Gas,
Media,
Entertainment,
Engineering,
Manufacturing
and
Health Services),
working for
NetApp,
Gartner
Consulting
and
EMC.
He holds a BS degree in Business from
Ball
State University
and is a graduate of the
University
of Pennsylvania
Wharton
School of Executive Education.
Principal Engineer
Open Networking Product Group
Dell
Inc.
Topic:
"Open Networking and HPC"
Slides:
available after the Symposium
Abstract
The new style of Web-scale IT
that is run in hyperscale organizations like
Google,
Facebook
and
Amazon
has changed the paradigm for
delivery of IT services.
Mainstream enterprise organizations
are now attempting to deliver
increased agility,
improved management
and/or
reduced cost
for their constituents.
Over the past 12 months,
vendors have continued to leverage
merchant-based silicon
within their switching portfolios.
Thus,
differentiation between vendor solutions
continues to shift toward software
(including
management,
provisioning,
automation
and
orchestration),
with hardware capabilities
(such as
bandwidth,
capacity
and
scalability)
becoming more standardized.
We will cover
what Dell is doing to lead the way in
Open Networking
and how HPC customers can
leverage Open Networking
in their deployments.
Biography
DJ Spry
is a Network Engineer with
over 18 years of experience
designing and operating
secure large-scale
campus,
data center,
and
service provider
networks.
Most recently, he has been concentrating on
cloud,
Software Defined Networking (SDN),
and
evangelizing the value of
Open Networking for
data center,
big data,
and
cloud deployments.
Prior to joining
Dell,
DJ was a Consulting Engineer for
Juniper
Networks
focusing on
Federal,
Department of Defense,
and
Intelligence Community
customers.
In addition,
he is a
United
States Air Force
veteran.
Executive Director
Texas
Advanced Computing Center
The University
of Texas
Topic:
"Data in a Flash:
Next Generation Architectures
for Big Data in Supercomputing
—
the Wrangler project and what comes next"
Slides:
available after the Symposium
Talk Abstract
Coming soon
Biography
Dan Stanzione is the Executive Director of the
Texas
Advanced Computing Center
(TACC)
at
The
University of Texas at Austin
and the Principal Investigator for
Wrangler.
He is also the PI for TACC's 10 PetaFlop
Stampede
supercomputer,
and has previously been involved in
the deployment and operation of the
Ranger
and
Lonestar
supercomputers at TACC.
He served as the Co-Director of
The
iPlant Collaborative,
an ambitious endeavor to build
cyberinfrastructure to address
the grand challenges of plant science.
Prior to joining TACC,
Dr. Stanzione was the founding director of the
Ira A. Fulton
High Performance Computing Institute
(HPCI)
at
Arizona
State University (ASU).
Before ASU,
he served as an AAAS Science Policy Fellow
in the
National
Science Foundation
and as a research professor at
Clemson
University,
his alma mater.
Senior Systems Engineer
Arista
Networks
Topic:
"The Value and Future of Ethernet in HPC"
Slides:
available after the Symposium
Talk Abstract
This presentation
will discuss how
Ethernet continues to be
used as a high-speed interconnect for
most of the commercial HPC clusters
in use today,
and why its future is promising.
With the arrival of
PCIe
4.0,
100 Gbps connections to the server
become practical.
Ethernet now offers
25 Gbps and 50 Gbps connections based on
25 Gbps lanes,
and 100 Gbps Ethernet has been shipping in volume
for over two years.
The combination of
these enhancements and silicon port density
will significantly drive down
the cost of the network per Gbps of throughput.
Furthermore,
400 Gbps Ethernet is on the horizon.
Ethernet is leading the innovation for
high-speed interconnect technology
compared to other transport solutions.
Innovations such as
Remote
Direct Memory Access
over Converged Ethernet
(RoCE),
iWARP,
and support for kernel bypass drivers
make Ethernet comparable to,
and in some ways advantageous over,
other high-speed interconnect technologies.
Biography
Mickey Stewart has
more than 20 years of experience in
computing and network technologies.
He works as a senior systems engineer for
Arista Networks,
specializing in
Data Center and High Performance Computing
architectures
using
the highest performance Ethernet switches and
the most modern and advanced
network operating system,
Arista
EOS.
He has held various
systems engineering,
solutions architecture
and
business roles.
Mickey has expertise in servers,
routing and switching,
unified communications,
network and information security,
storage and optical networking.
Mickey holds/has held
many industry certifications such as
CCIE,
CISSP,
CCDP,
CNE
and
CNX.
Senior Computer Information Specialist
Department of
Computing & Information Sciences
Kansas State
University
Topic:
"Big Storage, Little Budget"
(with
Dan
Andresen
and
Kyle
Hutson)
Slides:
available after the Symposium
Abstract
Kansas State
University's
HPC
cluster
was running out of storage space last year.
Traditional HPC storage solutions
were either too expensive to be feasible
or
offered too little capacity to be of long-term use.
 The system that ended up providing
the best storage capacity
for the available budget was
Ceph,
an open-source project
that provides storage striped across
many commodity servers.
This session is a case study of
the pros and cons of
our implementation of
a 1.5 PB Ceph-based storage cluster,
discussing the history of
network-based filesystems,
including why our previous
Gluster-based system
was no longer suitable.
 Questions and discussion are encouraged.
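For readers new to Ceph, the sketch below shows one way an application can talk to a Ceph cluster directly through the librados C API; CRUSH then places and replicates the object across the commodity servers. The pool name "data", the "admin" user, and the default ceph.conf path are assumptions for the example, not details of the deployment discussed in this session.
/* Minimal librados sketch: connect to a Ceph cluster and write one
 * object.  Assumes a default /etc/ceph/ceph.conf, the client.admin
 * user, and an existing pool named "data"; all of these are
 * illustrative.  Build roughly as: cc ceph_demo.c -lrados */
#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;

    if (rados_create(&cluster, "admin") < 0) {          /* client.admin */
        fprintf(stderr, "cannot create cluster handle\n");
        return 1;
    }
    rados_conf_read_file(cluster, NULL);                 /* default conf */
    if (rados_connect(cluster) < 0) {
        fprintf(stderr, "cannot connect to cluster\n");
        return 1;
    }

    if (rados_ioctx_create(cluster, "data", &io) < 0) {  /* open pool */
        fprintf(stderr, "cannot open pool\n");
        rados_shutdown(cluster);
        return 1;
    }

    const char *payload = "hello from the HPC cluster";
    /* The object is placed and replicated across OSDs by CRUSH. */
    rados_write_full(io, "demo-object", payload, strlen(payload));

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}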
Biography
Adam Tygart
has been an HPC system administrator
since 2008.
He has been using Linux since high school.
Beocat,
Kansas State University's
Gentoo-based
HPC cluster,
was implemented in its current form by
Adam while he was still an undergraduate.
Regional Scale-out Storage Director
Quantum High Performance Storage
Quantum
Topic:
"High Performance and Long Term Retention
That Doesn't Break the Bank"
Slides:
PowerPoint
PDF
Talk Abstract
Quantum's
tiered storage approach to
high demand environments
is the core of our business.
We help organizations
deliver faster,
retain longer,
and
maximize the value of
their data storage infrastructure,
placing data on the right tier of storage
at the right time,
based on its requirements.
Biography
With over 16 years of experience in technology,
Neal Wingenbach has had significant exposure to
high performance/high demand environments.
Quantum's approach to tiered storage
in the Supercomputing market is unquestioned.
Whether presenting geospatial data to
NASA,
streaming online content to network broadcasts,
or
crunching genomic sequence data,
Quantum's expertise is delivering
data based on the value to "the business."
As analytics becomes
more and more dependent on trends,
the need for data retention has grown.
The ability to present data at the right time,
at the right performance,
is where Quantum is driving value
for its customers.
Assistant Professor
Department
of Business and Computer Science
Southwestern
Oklahoma State University
Topic:
"Cloud Computing"
Slides:
available after the Symposium
Talk Abstract
Cloud computing is an increasingly important solution for providing services deployed in dynamically scalable cloud networks. Services in cloud computing networks may be virtualized with specific servers which host abstracted details. Some of the servers are active and available, while others are busy or heavily loaded, and the remaining are offline for various reasons. Users expect the right, available servers to complete their application requirements. Therefore, in order to provide an effective control scheme with parameter guidance for cloud resource services, failure detection is essential to meet users' service expectations. It can resolve possible performance bottlenecks in providing the virtual service for cloud computing networks. Most existing Failure Detector (FD) schemes do not automatically adjust their detection service parameters for dynamic network conditions, so they cannot be used in actual applications.
This presentation explores FD properties in relation to actual, automatically fault-tolerant cloud computing networks, and proposes three general analysis methods to satisfy user requirements. Based on these general methods, we propose special, dynamic Failure Detectors as a major improvement over existing schemes. We carry out extensive experiments to compare the quality-of-service performance of our FDs with several existing FDs. Our experimental results demonstrate that our scheme can adjust FD control parameters to obtain better service. Such FDs have been tuned on the IBM cloud computing platform, and we hope they will be widely applied in industrial and commercial settings.
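To make the failure-detection idea concrete, the sketch below (in C, to match the other examples here) shows a generic adaptive heartbeat detector whose timeout is recomputed from the observed mean and variance of heartbeat inter-arrival times. It illustrates only the general principle of a detector that adjusts its parameters to network conditions; it is not the FD scheme proposed in this talk, and all constants are illustrative.
/* Generic adaptive heartbeat failure detector: the timeout adapts to
 * the observed mean and variance of inter-arrival times.  This is a
 * sketch of the general idea, not the scheme proposed in the talk. */
#include <stdio.h>
#include <math.h>

typedef struct {
    double mean;         /* running mean of heartbeat intervals (s) */
    double var;          /* running variance of intervals           */
    double last_arrival; /* timestamp of the last heartbeat (s)     */
    double safety;       /* tolerated number of std deviations      */
} failure_detector;

/* Update statistics when a heartbeat arrives (exponential smoothing). */
static void fd_heartbeat(failure_detector *fd, double now)
{
    double interval = now - fd->last_arrival;
    double alpha = 0.1;
    double diff = interval - fd->mean;
    fd->mean += alpha * diff;
    fd->var   = (1 - alpha) * (fd->var + alpha * diff * diff);
    fd->last_arrival = now;
}

/* A server is suspected if no heartbeat arrived within the adaptive
 * timeout: mean + safety * stddev. */
static int fd_suspect(const failure_detector *fd, double now)
{
    double timeout = fd->mean + fd->safety * sqrt(fd->var);
    return (now - fd->last_arrival) > timeout;
}

int main(void)
{
    failure_detector fd = { .mean = 1.0, .var = 0.01,
                            .last_arrival = 0.0, .safety = 4.0 };

    /* Simulated heartbeat arrival times (seconds). */
    double arrivals[] = { 1.0, 2.1, 3.0, 4.2, 5.1 };
    for (int i = 0; i < 5; i++)
        fd_heartbeat(&fd, arrivals[i]);

    printf("suspect at t=6.0?  %s\n", fd_suspect(&fd, 6.0)  ? "yes" : "no");
    printf("suspect at t=12.0? %s\n", fd_suspect(&fd, 12.0) ? "yes" : "no");
    return 0;
}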
Biography
Neal N. Xiong is currently a faculty member in the Dept. of Business and Computer Science (BCS), Southwestern Oklahoma State University (SWOSU), OK, USA. He received PhD degrees from Wuhan University (in software engineering) and the Japan Advanced Institute of Science and Technology (in dependable networks). Before joining SWOSU, he worked for many years at Colorado Technical University, Wentworth Institute of Technology, and Georgia State University. His research interests include Cloud Computing, Business Networks, Security and Dependability, Parallel and Distributed Computing, and Optimization Theory.
Dr. Xiong has published over 200 international journal papers. He has served as Editor-in-Chief, Associate Editor, or editorial board member for over 10 international journals (including Associate Editor for IEEE Transactions on Systems, Man & Cybernetics: Systems, Associate Editor for Information Sciences, Editor-in-Chief for the Journal of Internet Technology (JIT), and Editor-in-Chief for the Journal of Parallel & Cloud Computing (PCC)). Dr. Xiong is the Chair of the
"Trusted
Cloud Computing" Task Force,
IEEE Computational Intelligence Society
(CIS),
and the
Industry
System Applications Technical Committee.
He is a Senior Member of the IEEE Computer Society.
OTHER
BREAKOUT SPEAKERS TO BE ANNOUNCED