Events

November 27, 2017, 12:00 PM
Zhijia Zhao: Automata-Centric Parallelization for Scalable and Parallel Data Processing

[Monday, November 27, 2017 at 12:00 PM in Wegmans 1400] ABSTRACT:
Automata are not only fundamental to theoretical computer science, but are also used in practice in many basic data processing routines, such as parsing, searching, data decoding, and querying. Since such routines commonly lie on the critical path to a user's response, their performance critically affects the overall responsiveness of software systems.

Meanwhile, parallel computing hardware has become prevalent, from high-end rack servers to portable mobile devices such as smartphones and tablets. To efficiently harness the growing hardware parallelism, computations need to expose sufficient and effective software parallelism.

However, automata-based applications are by nature sequential, conceptually processing the input from its left end to its right. This serial behavior prevents automata applications from taking advantage of the growing hardware parallelism. In this talk, I will introduce a set of parallelization techniques designed for automata applications and argue that such “inherently sequential” computations can in fact run effectively in parallel. By centering on such a basic computation model -- automata -- we aim to make the parallelization solutions generally applicable to a wide range of serial applications.
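The abstract does not spell out the specific techniques, but one standard way to parallelize an "inherently sequential" finite-state machine run is enumerative execution: split the input into chunks, run every chunk from all possible start states in parallel, and then compose the resulting state-to-state mappings. The Python sketch below illustrates that general idea on a made-up three-state DFA; it is an illustration of the approach, not the speaker's specific techniques.

# Sketch of enumerative DFA parallelization: each input chunk is run from
# every possible start state, producing a state -> state mapping; the mappings
# are then composed sequentially (cheap: O(#chunks * #states)).
# Toy DFA and input for illustration only.
from concurrent.futures import ProcessPoolExecutor

STATES = range(3)
TRANS = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 0,
}

def run_chunk(chunk):
    """Run the chunk from every start state; return the induced mapping."""
    mapping = {}
    for s in STATES:
        cur = s
        for ch in chunk:
            cur = TRANS[(cur, ch)]
        mapping[s] = cur
    return mapping

def parallel_run(text, start_state=0, nchunks=4):
    size = max(1, len(text) // nchunks)
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    with ProcessPoolExecutor() as pool:
        mappings = list(pool.map(run_chunk, chunks))
    # Compose the per-chunk mappings in input order to recover the final state.
    state = start_state
    for m in mappings:
        state = m[state]
    return state

if __name__ == '__main__':
    data = 'abba' * 1000
    assert parallel_run(data) == run_chunk(data)[0]   # matches the serial run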

BIO:
Zhijia Zhao is an assistant professor in the Computer Science and Engineering Department at the University of California, Riverside (UCR). He obtained his Ph.D. in Computer Science from The College of William and Mary in 2015, and his bachelor's and master's degrees from Harbin Institute of Technology in China in 2007 and 2009, respectively. His general research interests are in programming systems and parallel computing. Specifically, he is interested in program optimization, parallelization, and reliability analysis on multicore and heterogeneous computing platforms, especially for automata-based applications (e.g., searching, parsing, querying, and decoding) and applications with irregular or nested data structures (e.g., trees, graphs, and semistructured data). He is also interested in mobile app analysis and optimization on the Android platform.


December 4, 2017, 01:00 PM
Richard Lange: Neural Networks for Variational Inference

[Monday, December 04, 2017 at 1:00 PM in Meliora 366] Learning and inference in probabilistic (and generative) models are at the heart of many Machine Learning problems. As data and models grow more complex, approximate methods for both become a necessary component of any ML toolbox. Variational approximations work by replacing a complicated probability distribution with a simple one, minimizing some metric of their difference. Viewing this metric as a “loss function,” researchers have now begun to leverage the ability of neural networks to minimize arbitrary loss functions as a way to train complicated probabilistic models. This framework allows fitting, by gradient-based methods, both generative models and so-called “recognition” models that learn to map directly from inputs to posterior approximations. To some, this framework represents a new philosophy on generative models in which the relationship between a latent variable and observed data may be a black box. While this is useful for modeling complicated or nonlinear processes, the inclusion of domain knowledge as additional structural constraints often results in models that are not only more interpretable, but also more accurate. I will review the concepts and techniques of this framework as well as give in-depth examples of successful published models.
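As a deliberately tiny illustration of treating the gap between a complicated posterior and a simple approximation as a loss to be minimized by gradient methods, the PyTorch sketch below fits a Gaussian approximation q(z) to the posterior of a toy conjugate model by stochastic gradient ascent on a Monte Carlo estimate of the evidence lower bound (ELBO), using the reparameterization trick. The model and hyperparameters are made up; this shows the mechanics of the framework, not any particular published model.

# Minimal sketch: fit q(z) = N(mu, sigma^2) to the posterior of a toy model
# (standard normal prior on z, observations x_i ~ N(z, 1)) by maximizing a
# single-sample Monte Carlo ELBO estimate with the reparameterization trick.
import torch

torch.manual_seed(0)
x = torch.tensor([2.0, 1.5, 2.5])            # toy observations

mu = torch.zeros(1, requires_grad=True)       # variational parameters
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

def log_joint(z):
    """log p(x, z) for the toy model, up to additive constants."""
    log_prior = -0.5 * z ** 2
    log_lik = (-0.5 * (x - z) ** 2).sum()
    return log_prior + log_lik

for step in range(2000):
    opt.zero_grad()
    eps = torch.randn(1)                      # reparameterization: z = mu + sigma * eps
    z = mu + log_sigma.exp() * eps
    log_q = -0.5 * ((z - mu) / log_sigma.exp()) ** 2 - log_sigma
    loss = -(log_joint(z) - log_q).sum()      # negative ELBO estimate as the "loss"
    loss.backward()
    opt.step()

# Exact posterior for this conjugate model is N(sum(x)/(n+1), 1/(n+1)).
print(float(mu), float(log_sigma.exp()))      # roughly 1.5 and 0.5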


December 8, 2017, 12:00 PM
Michael Ignatowski: Developments in Advanced Memory Systems and Processing-in-Memory

[Friday, December 08, 2017 at 12:00 PM in Goergen 109] Abstract:
Processing-in-Memory (PIM) has been a research topic since at least the 1990s. Recent developments in stacked DRAM technology, new interfaces, and new programming models have significantly increased interest in PIM, and we expect to see the first commercial PIM devices in the coming years. This talk will outline these recent developments in PIM as well as the general class of "intelligent memories". It will also review other new disruptive memory technologies being developed, including 3D stacking, NVRAM, and new interface standards such as Gen-Z and CCIX.

Bio:
Michael Ignatowski is a Senior Fellow at Advanced Micro Devices (AMD) in Austin, Texas, where he leads AMD's research efforts on advanced memory systems. His current research focuses on processing-in-memory, exploiting emerging NVRAM technologies, and reconfigurable computing. He was the Principal Investigator for AMD's FastForward exascale memory research. Prior to joining AMD in 2010, Michael worked for 27 years at IBM, leading work on performance analysis and memory hierarchy designs for S/390 mainframes, Power systems, and SP supercomputers. In 2008 he joined the IBM Watson Research lab to work on 3D chip stacking technologies. He currently holds over two dozen patents related to memory systems, as well as a number of patents pending on processing-in-memory designs. Michael received an MS in Computer Engineering from the University of Michigan and a BS in Physics from Michigan State University. He currently has a daughter attending RIT.


December 11, 2017, 12:00 PM
Adriana Kovashka: Towards Human-like Understanding of Visual Content: Facilitating Search and Decoding Visual Media

[Monday, December 11, 2017 at 12:00 PM in Wegmans 1400] ABSTRACT:
In the first part of this talk, I will describe our work on interactive image search. We introduced a new form of interaction for search, where the user can give rich feedback to the system via semantic visual attributes (e.g., "metallic", "pointy", and "smiling"). The proposed WhittleSearch approach allows users to narrow down the pool of relevant images by comparing individual properties of the results to those of the desired target. Building on this idea, we develop a system-guided version of the method which engages the user in a 20-questions-like game where the answers are visual comparisons. To ensure that the system interprets the user's attribute-based feedback as intended, we further show how to efficiently adapt a generic model for an attribute to more closely align with the individual user's perception. Our work transforms the interaction between the image search system and its user from keywords and clicks to precise and natural language-based communication. We demonstrate the impact of this new search modality for effective retrieval on databases ranging from consumer products to human faces. This is an important step in making the output of vision systems more useful, by allowing users both to express their needs more precisely and to better interpret the system's predictions.
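As a rough illustration of the comparative-feedback idea described above: given attribute scores predicted for each image, a statement such as "the target is shinier than this result" keeps only the candidates whose predicted score for that attribute exceeds the reference image's, and intersecting several such constraints whittles down the pool. The Python sketch below uses fabricated images and scores; in the actual system the scores would come from learned (and user-adapted) attribute models.

# Sketch of WhittleSearch-style relative-attribute feedback: each feedback
# statement compares the (unseen) target to a reference image on one
# attribute, and prunes candidates that violate the comparison.
import operator

# attribute_scores[image_id][attribute] = predicted strength of the attribute
attribute_scores = {
    'img0': {'shiny': 0.9, 'pointy': 0.2},
    'img1': {'shiny': 0.4, 'pointy': 0.8},
    'img2': {'shiny': 0.7, 'pointy': 0.5},
    'img3': {'shiny': 0.1, 'pointy': 0.9},
}

def whittle(candidates, feedback):
    """feedback: list of (attribute, relation, reference_image) triples,
    e.g. ('shiny', 'more', 'img1') means "the target is shinier than img1"."""
    ops = {'more': operator.gt, 'less': operator.lt}
    for attr, relation, ref in feedback:
        ref_score = attribute_scores[ref][attr]
        candidates = [c for c in candidates
                      if ops[relation](attribute_scores[c][attr], ref_score)]
    return candidates

remaining = whittle(list(attribute_scores),
                    [('shiny', 'more', 'img1'),     # target is shinier than img1
                     ('pointy', 'less', 'img2')])   # target is less pointy than img2
print(remaining)   # ['img0']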

In the second part of my talk, I will discuss two recent projects on using computer vision to analyze images in the media, whose persuasive intent often lies beyond their literal physical content. As a first step in understanding persuasion in the visual media, we released a dataset of 64,832 image ads and a video dataset of 3,477 ads, containing rich annotations about the subject, sentiment, and rhetoric of the ads. The key task we focus on is the ability of a computer vision system to answer questions about the actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer. To help perform this task, we focus on two challenges: decoding the symbolic references that ads make (e.g., a dove symbolizes peace), and recognizing objects in the severely non-photorealistic portrayals that some ads use. In a second media understanding project, we develop a method that captures photographers’ styles and predicts the authorship of artistic photographs. To explore the feasibility of addressing photographer identification with current computer vision techniques, we create a new dataset of over 180,000 images taken by 41 well-known photographers. We examine the effectiveness of a variety of features and convolutional neural networks for this task. We also use what our method has learned to generate new “pastiche” photographs in the style of an author.
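For the photographer-identification task, one plausible baseline (not necessarily the approach taken in the work above) is to fine-tune an off-the-shelf convolutional network as a 41-way classifier over the photographers. The PyTorch sketch below assumes an ImageFolder-style directory with one subfolder of images per photographer; the path, network choice, and hyperparameters are placeholders.

# Sketch of a 41-way photographer-authorship classifier obtained by
# fine-tuning a pretrained CNN. A plausible baseline only, not the method
# from the talk; 'photos/train' is a placeholder directory with one
# subfolder per photographer.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder('photos/train', transform=tfm)   # placeholder path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 41)    # 41 photographers

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()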

BIO:
Adriana Kovashka is an Assistant Professor in Computer Science at the University of Pittsburgh. She received her PhD in 2014 from The University of Texas at Austin. Her research interests primarily lie in computer vision, with some overlap in machine learning, information retrieval, natural language processing, and human computation. Her work is funded by two NSF grants and a Google Faculty Research Award. Her research has been published in the top computer vision conferences, such as Computer Vision and Pattern Recognition (CVPR) and the International Conference on Computer Vision (ICCV), as well as the annual conference of the Association for Computational Linguistics (ACL). She has served as Area Chair for CVPR 2018, Tutorial Chair for WACV 2018, and Doctoral Consortium Chair for CVPR 2015-2017.


December 20, 2017, 01:30 PM
Joseph Izraelevitz: Concurrency Implications of Nonvolatile Byte-Addressable Memory

[Wednesday, December 20, 2017 at 1:30 PM in Goergen 109]

In the near future, storage technology advances are expected to provide nonvolatile byte-addressable memory (NVM) for general-purpose computing. These new technologies provide high-density storage at speeds only slightly slower than DRAM, and industry consequently expects them to serve as main memory. We believe that the widespread availability of fast NVM storage will have a significant impact on all levels of the computing hierarchy. Such a technology can be leveraged by an assortment of common applications, and will require significant changes to both operating systems and systems library code. Existing software for durable storage is a poor match for NVM, as it assumes both a larger granularity of access and a higher latency overhead.

Our thesis is that exploiting this new byte-addressable, nonvolatile technology requires a significant redesign of current systems, and that by designing systems tailored specifically to NVM we can realize performance gains. This thesis extends existing system software for understanding and using nonvolatile main memory. In particular, we propose to treat durability as a shared-memory construct instead of an I/O construct, and consequently focus on concurrent applications.

The work covered here builds theoretical and practical infrastructure for using nonvolatile main memory. At the theory level, we explore what it means for a concurrent data structure to be “correct” when its state can reside in nonvolatile memory, propose novel designs and design philosophies for data structures that meet these correctness criteria, and demonstrate that all nonblocking data structures can be easily transformed into correct persistent versions of themselves. At the practical level, we explore how to give programmers systems for manipulating persistent memory in a consistent manner, thereby avoiding inconsistencies after a crash. Combining these two ideas, we also explore how to compose data structure operations into larger operations that remain consistent in persistence.
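To make the flavor of "manipulating persistent memory in a consistent manner" concrete, the Python sketch below walks through a generic redo-log protocol for a failure-atomic update, with persist() standing in for the cache-line write-back and fence instructions a real NVM implementation would issue. It is a conceptual illustration of the ordering discipline involved, not the specific designs proposed in this work.

# Conceptual sketch of a redo-log protocol for failure-atomic updates to
# persistent memory. persist() is only a marker of where a real implementation
# would force data to durable media (e.g., cache-line write-back plus fence).

def persist(region):
    """Placeholder: ensure 'region' has reached durable media before continuing."""
    pass

log = {'entries': {}, 'committed': False}    # imagined to live in persistent memory
data = {'balance_a': 100, 'balance_b': 50}   # imagined to live in persistent memory

def transfer(amount):
    # 1. Record the intended new values in the log, then persist the log.
    log['entries'] = {'balance_a': data['balance_a'] - amount,
                      'balance_b': data['balance_b'] + amount}
    persist(log)
    # 2. Persist the commit flag; from this point on, recovery replays the log.
    log['committed'] = True
    persist(log)
    # 3. Apply the logged values to the real locations and persist them.
    data.update(log['entries'])
    persist(data)
    # 4. Retire the log; a crash before this step is handled by replaying it.
    log['committed'] = False
    persist(log)

def recover():
    """Run after a crash: redo the update iff the commit flag reached durable media."""
    if log['committed']:
        data.update(log['entries'])
        persist(data)
        log['committed'] = False
        persist(log)

transfer(25)
print(data)   # {'balance_a': 75, 'balance_b': 75}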