Title:
Intelligent Integration of Multimodal Data for Clinical Decision Support
Abstract:
For many diseases and illnesses, the analysis of individual data modalities such as imaging or electronic health records alone is insufficient for accurate modeling; only through the integration and processing of all salient sources of information can a model be created that produces reliable clinical recommendations. This makes clinical decision support a rich area for the development of novel machine learning and data science methodologies.
In this presentation I will provide an overview of multimodal data analysis along with examples where this approach was used in clinical applications, including postoperative cardiac care and heart failure. Though the developed techniques were motivated by clinical problems, the methodologies are broadly applicable to many machine learning and data science tasks.
Title:
Convergence rate analysis and improved iterations for numerical radius computation
Abstract:
For the discrete-time dynamical system $x_{k+1} = Ax_k$, the spectrum of $A \in \mathbb{C}^{n \times n}$ tells us about the asymptotic behavior of the system, but it often does not capture information about the transient behavior. To assess this, i.e., how large $\|A^k\|_2$ may become for intermediate values of $k$, we must turn to other quantities. One possibility is the numerical radius, which is the modulus of a globally outermost point in the field of values of a matrix. In this talk, we consider two very different existing approaches to computing the numerical radius, and via new analyses, show that combining them in a new hybrid algorithm is better than using either by itself.
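For reference, a standard characterization of the numerical radius is $r(A) = \max_{\theta \in [0,2\pi)} \lambda_{\max}\big(\tfrac{1}{2}(e^{i\theta}A + e^{-i\theta}A^*)\big)$, a one-dimensional maximization of the largest eigenvalue of a Hermitian matrix. The sketch below implements only this naive baseline (grid search plus local refinement) to make the quantity concrete; it is not the hybrid algorithm of the talk, and the function names are illustrative.

```python
# Naive baseline: r(A) = max_theta lambda_max(H(e^{i*theta} A)), H(M) = (M + M^*)/2.
# Grid search over theta, then local refinement around the best grid point.
import numpy as np
from scipy.optimize import minimize_scalar

def lam_max_hermitian_part(A, theta):
    """Largest eigenvalue of the Hermitian part of e^{i*theta} * A."""
    M = np.exp(1j * theta) * A
    return np.linalg.eigvalsh((M + M.conj().T) / 2)[-1]  # eigenvalues in ascending order

def numerical_radius(A, grid_points=360):
    thetas = np.linspace(0.0, 2.0 * np.pi, grid_points, endpoint=False)
    vals = np.array([lam_max_hermitian_part(A, t) for t in thetas])
    t0 = thetas[vals.argmax()]
    h = 2.0 * np.pi / grid_points
    res = minimize_scalar(lambda t: -lam_max_hermitian_part(A, t),
                          bounds=(t0 - h, t0 + h), method="bounded")
    return max(vals.max(), -res.fun)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
    r = numerical_radius(A)
    rho = max(abs(np.linalg.eigvals(A)))   # spectral radius
    nrm = np.linalg.norm(A, 2)             # 2-norm
    print(r, rho, nrm)                     # sanity check: rho <= r <= nrm
```

The final line gives a quick sanity check, since $\rho(A) \le r(A) \le \|A\|_2$ always holds.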
Title: Fragile Complexity of Comparison-Based Algorithms
Abstract:
We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work and that, with randomization, an even lower fragile complexity is possible.
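To make the definition concrete, the toy sketch below (not from the paper) instruments two ways of finding the minimum and reports, for each, the largest number of comparisons any single element participates in: a running-minimum scan can charge up to n-1 comparisons to the minimum itself, while a balanced tournament charges at most about log2 n to every element.

```python
# Toy illustration (not from the paper): count how many comparisons each element
# takes part in; the maximum over elements is the fragile complexity of the run.
from math import ceil, log2

def scan_min(xs, counts):
    """Running-minimum scan: the minimum can take part in n-1 comparisons."""
    best = 0
    for i in range(1, len(xs)):
        counts[best] += 1
        counts[i] += 1
        if xs[i] < xs[best]:
            best = i
    return best

def tournament_min(xs, counts):
    """Pairwise knockout rounds: each element is in at most ceil(log2 n) comparisons."""
    alive = list(range(len(xs)))
    while len(alive) > 1:
        survivors = []
        for j in range(0, len(alive) - 1, 2):
            a, b = alive[j], alive[j + 1]
            counts[a] += 1
            counts[b] += 1
            survivors.append(a if xs[a] < xs[b] else b)
        if len(alive) % 2 == 1:          # odd element advances without a comparison
            survivors.append(alive[-1])
        alive = survivors
    return alive[0]

if __name__ == "__main__":
    n = 64
    xs = list(range(n))                  # increasing input: worst case for the scan
    c_scan, c_tour = [0] * n, [0] * n
    assert xs[scan_min(xs, c_scan)] == min(xs)
    assert xs[tournament_min(xs, c_tour)] == min(xs)
    print("scan fragile complexity:      ", max(c_scan))   # n - 1 = 63
    print("tournament fragile complexity:", max(c_tour))   # <= ceil(log2 n) = 6
    assert max(c_tour) <= ceil(log2(n))
```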
Title: Design of a Low-Power Wide-Area Network over White Spaces
Abstract:
Internet of Things (IoT) applications, such as sensing and monitoring, smart agriculture, and smart cities, aim to utilize IoT devices (i.e., sensors) to enhance the quality of life, health, and safety of communities in both urban and rural areas. Due to the growing demand for these applications, the number of IoT sensors is increasing rapidly and is expected to reach approximately 29 billion by 2030. IoT sensors are typically battery-powered and dispersed widely (e.g., in thousands) over long distances. It thus becomes extremely challenging to connect and coordinate them for periodic or sporadic data collection at a BS (base station) and to make time-critical, data-driven decisions. In this talk, I will discuss the design, implementation, and deployment experiences of a novel low-power wide-area network technology called SNOW (sensor network over white spaces), which can connect and coordinate thousands of sensors and enable energy-efficient and low-latency data collection at a BS.
Title: Making Progress on Language Learner Educational Application Tasks
Abstract:
In this talk, I will describe our recent work on two educational application tasks for language learners: Grammatical Error Correction (GEC) and developing cloze exercises. Standard evaluations of GEC systems make use of a fixed reference text generated relative to the original text. We study the performance of GEC systems relative to a closest gold, a gold reference created relative to the output of a system. Evaluation with closest golds reveals that the real performance is 20-40 points better than standard evaluations show; however, state-of-the-art systems prefer to make local spelling and grammar edits, leaving out more complex word-level changes. We propose a novel method to address this deficiency of GEC models.
The second part of the talk will focus on generating cloze exercises through back-translation. In a cloze exercise, a student is presented with a carrier sentence with one word hidden, and a multiple-choice list that includes the correct answer and several inappropriate options, called distractors. We use hundreds of back-translations of the carrier sentence via multiple pivot languages to generate a rich set of challenging distractors. We demonstrate that the proposed method significantly outperforms the current state of the art.
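As a rough illustration of the back-translation idea (not the exact method of the talk), the sketch below round-trips a carrier sentence through pivot languages and keeps words that land in the position of the hidden word. The `translate(text, src, tgt)` callable is a hypothetical placeholder for any MT system, and the position-matching heuristic is a simplification for illustration.

```python
# Hypothetical sketch only: `translate` is NOT a real API, and the alignment
# heuristic (identical tokens before and after the slot) is a simplification.
from collections import Counter
from typing import Callable, List

def back_translations(sentence: str, pivots: List[str],
                      translate: Callable[[str, str, str], str]) -> List[str]:
    """Round-trip the carrier sentence English -> pivot -> English for each pivot."""
    return [translate(translate(sentence, "en", p), p, "en") for p in pivots]

def distractor_candidates(carrier: str, answer: str,
                          round_trips: List[str], k: int = 3) -> List[str]:
    """Keep words that a round-trip places exactly where the hidden word was."""
    before, after = carrier.split(answer, 1)
    prefix, suffix = before.split(), after.split()
    counts = Counter()
    for bt in round_trips:
        toks = bt.split()
        if (len(toks) == len(prefix) + 1 + len(suffix)
                and toks[:len(prefix)] == prefix
                and toks[len(prefix) + 1:] == suffix
                and toks[len(prefix)].lower() != answer.lower()):
            counts[toks[len(prefix)].lower()] += 1
    return [w for w, _ in counts.most_common(k)]
```

Candidates that survive this filter share the carrier context but differ in the hidden slot, which is the property one wants in a challenging distractor.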
Title: Machine learning for eye treatment and intelligent tutoring
Abstract: In this talk, I will share our recent research progress on two problems: