- University of Texas at Austin
- Ph.D. Candidate in Department of Physics
- Center for Theoretical and Computational Neuroscience
- Contact Information
- luyan.yu [at] utexas.edu
- NHB 4.362, 100 E 24TH ST
- Austin, Texas 78712, USA

The virtuosity of the spiking neural networks in biological brains has always intrigued people. Truly, a single neuron is easy to understand. However, when an astronomical number of them are put together and start to talk to each other, the complexity quickly grows beyond control.

To pave the way toward a better understanding of such networks, this project focuses on stochastic modeling of the network dynamics. The goal is to develop efficient algorithms that can compute the statistical quantities of any given network without running simulations.
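As a toy illustration of the general idea (not this project's actual algorithm), one can write down the master equation of a small stochastic network and read its stationary statistics off directly, with no Monte Carlo simulation. The two-neuron network below, its flip rates, and the coupling value are all made up for the example:

```python
import numpy as np

# Hypothetical toy network: two binary neurons; each turns ON at a rate
# that is boosted when the other neuron is ON, and turns OFF at a fixed rate.
r_on, r_off, coupling = 1.0, 2.0, 1.5

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
Q = np.zeros((4, 4))                 # master-equation generator, Q[i, j] = rate i -> j
for i, s in enumerate(states):
    for k in range(2):               # flipping neuron k
        t = list(s); t[k] ^= 1
        j = states.index(tuple(t))
        other = s[1 - k]
        Q[i, j] = (r_on + coupling * other) if s[k] == 0 else r_off
    Q[i, i] = -Q[i].sum()            # rows of a generator sum to zero

# Stationary distribution = left null vector of Q (no simulation needed).
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

# Any stationary statistic follows, e.g. the mean number of active neurons.
mean_activity = sum(pi[i] * sum(s) for i, s in enumerate(states))
print(pi, mean_activity)
```

For networks of realistic size the state space explodes, which is exactly why more clever algorithms are needed.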

Phylogenetic trees are widely used in many fields, especially bioinformatics. The evolutionary history manifested in these trees is a powerful tool in the study of the migration of animals, the spread of viruses, and so on.

How to reconstruct such trees **efficiently** and **effectively** from real-world data has been a problem of significant importance. This project utilized tools from tropical geometry to study the structure of the space consisting of all phylogenetic trees.
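One concrete object from this setting is the tropical (generalized Hilbert projective) metric on tree space. A minimal sketch, assuming each tree is represented by its vector of pairwise leaf-to-leaf distances (the two 4-leaf trees below are invented for illustration):

```python
import numpy as np

def tropical_distance(u, v):
    """Tropical metric between two trees represented as vectors of
    pairwise leaf distances: d(u, v) = max_i(u_i - v_i) - min_i(u_i - v_i)."""
    d = np.asarray(u, float) - np.asarray(v, float)
    return d.max() - d.min()

# Two hypothetical 4-leaf trees, each flattened to its 6 pairwise
# leaf distances (pair order: 12, 13, 14, 23, 24, 34).
t1 = [2, 4, 4, 4, 4, 2]
t2 = [3, 4, 4, 4, 4, 3]
print(tropical_distance(t1, t2))
```

Note the metric is invariant under adding a constant to every entry, which is why it lives naturally on a projective quotient of tree space.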

Secure Computation of Deep Networks

Deep neural network is perhaps the most popular phrase nowadays. Its power is changing the world; at the same time, security and privacy issues have come to people’s attention. The possibility of leaking private user or server data is intolerable in financial, medical, and other sensitive fields.

There are two stages to consider for security — the training stage and the evaluation stage. In this project, we deal with the latter. The security strategy we use and improve upon is the integer vector homomorphic encryption scheme proposed by Zhou and Wornell.
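A toy sketch of why such a scheme allows computing on encrypted data: in the Zhou–Wornell construction, a ciphertext c satisfies Sc = wx + e for a secret key S, a large weight w, and small noise e, so adding ciphertexts decrypts to the sum of plaintexts. The snippet below takes S to be the identity purely for illustration; the real scheme hides x behind a random key and supports key switching:

```python
import numpy as np

w = 10_000                          # large scaling weight
rng = np.random.default_rng(0)

def encrypt(x):
    # Toy version with identity secret key: c = w*x + small noise e.
    # (Only meant to show why addition survives encryption.)
    e = rng.integers(-5, 6, size=len(x))
    return w * np.asarray(x) + e

def decrypt(c):
    # Rounding removes the accumulated noise as long as it stays < w/2.
    return np.rint(c / w).astype(int)

x1, x2 = np.array([1, 2, 3]), np.array([10, 20, 30])
c_sum = encrypt(x1) + encrypt(x2)   # add ciphertexts only
print(decrypt(c_sum))               # -> [11 22 33], i.e. x1 + x2
```

Noise grows with every homomorphic operation, which is what bounds the depth of network evaluation the scheme can support.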

Classification with Tensor Networks

Tensor networks are widely used in quantum physics for finding the ground state of a quantum system with variational methods. They reduce an intractably large parameter space into small, tractable parts, usually with some *interpretable structure*, and therefore enable us to develop efficient numerical algorithms for the minimization.

Not coincidentally, the core of machine learning is also minimization. There is thus a natural motivation to borrow ideas from tensor networks and apply them to machine learning. In this project, we explore the possibility of applying one of the best-known tensor networks, the matrix product state, to classification problems.
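A minimal sketch of the classification setup, following the general recipe of Stoudenmire and Schwab, with made-up sizes and random, untrained weights: each pixel is lifted by a local feature map, and the resulting product state is contracted against an MPS whose middle tensor carries an extra label leg.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, L = 6, 4, 3            # pixels, bond dimension, classes (toy sizes)

# Random MPS weights: a (2, D, D) tensor per site; the middle site also
# carries a label leg of size L.  In training these would be optimized,
# e.g. by DMRG-style sweeps; here they are only placeholders.
tensors = [rng.normal(size=(2, D, D)) / np.sqrt(D) for _ in range(N)]
label_tensor = rng.normal(size=(L, 2, D, D)) / np.sqrt(D)
mid = N // 2

def feature(x):
    # Local feature map: a pixel value in [0, 1] becomes a 2-vector,
    # so a whole image becomes a rank-N product state.
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def scores(pixels):
    out = np.eye(D)[None]    # (1, D, D): left boundary with a dummy label leg
    for i, x in enumerate(pixels):
        phi = feature(x)
        if i == mid:         # site carrying the label leg
            site = np.einsum('s,lsab->lab', phi, label_tensor)
            out = np.einsum('mab,lbc->lac', out, site)
        else:
            site = np.einsum('s,sab->ab', phi, tensors[i])
            out = np.einsum('lab,bc->lac', out, site)
    return np.trace(out, axis1=1, axis2=2)   # one score per class

s = scores(rng.random(N))
print(s)
```

The point of the MPS form is that this left-to-right contraction costs only polynomially in N and D, never touching the full 2^N-dimensional feature space.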

The idea of the quantum Boltzmann machine is straightforward: simply replace the hidden and visible layers with quantum Pauli spins. But doing so makes the problem computationally intractable on a classical computer due to the exponentially large state space. Without a real quantum computer, we cannot train such a Boltzmann machine.

Instead, if we quantize only the hidden layer and keep the visible layer classical, we avoid the intractable computations while still *‘stealing’* some benefits from the quantum world. One benefit we observe is that the quantum restricted Boltzmann machine can avoid local minima while searching the state space when applied to reinforcement learning tasks.

Topological Transitions Induced by Antiferromagnetism in Topological Insulator

The interface formed by stacking an antiferromagnetic (AFM) and a topological insulator (TI) thin film can be magnetized by an external magnetic field. The field strength required to flip the magnetization is determined by the interfacial coupling strength of the specific sample and may vary due to impurities or disorder.

Therefore, in an AFM/TI/AFM trilayer structure, we expect antisymmetric magnetoresistance spikes due to the **unsynchronized** magnetic switchings. This has been confirmed both theoretically and experimentally.

For more details please refer to this paper.

Localization in Quantum Random Walk

The quantum random walk exhibits many unique phenomena compared with the classical version, showing the bizarreness of the quantum world. One such intriguing phenomenon is called *localization*.

In the classical case, the probability of finding the random walker at any fixed point dissipates and vanishes as time passes. In the quantum case, however, a **finite probability** of finding the walker at certain points can remain no matter how long the walk runs. In this project, we investigated the localization phenomenon on a honeycomb lattice.

More details can be found in this published paper.

The quantum random walk is the quantum analog of the classical random walk, and many interesting phenomena that distinguish it from its classical counterpart can be found and studied. A comprehensive software package for simulating quantum random walks is therefore needed.

We developed the **Second Quantization Quantum Walk** (Github) package in Mathematica that supports symbolic specification of the quantum walker in the form of second quantization ladder operators.

It supports symbolic Wick expansion of operators and automatic Hamiltonian construction. *GPU acceleration* is available on machines with CUDA installed. Various types of auxiliary functions are provided for postprocessing simulation data.

(The code snippet here is a complete program for simulating a 2D quantum walk.)
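For a flavor of what such a simulation involves (independent of the package, and in plain NumPy rather than Mathematica), here is a minimal 2D discrete-time walk with a Grover coin on a periodic square lattice; the lattice size, step count, and initial state are chosen arbitrarily:

```python
import numpy as np

L, T = 41, 15                                 # lattice size, number of steps
G = 0.5 * np.ones((4, 4)) - np.eye(4)         # 4-dimensional Grover coin (unitary)
shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # coin state -> move +x, -x, +y, -y

# Walker starts at the center in an equal superposition of coin states.
psi = np.zeros((4, L, L), dtype=complex)
psi[:, L // 2, L // 2] = 0.5

for _ in range(T):
    psi = np.einsum('cd,dxy->cxy', G, psi)            # coin flip
    psi = np.stack([np.roll(psi[c], s, axis=(0, 1))   # coin-conditioned shift
                    for c, s in enumerate(shifts)])

# Position distribution: sum |amplitude|^2 over the coin degree of freedom.
prob = (np.abs(psi) ** 2).sum(axis=0)
print(prob.shape, prob.sum())                 # unitarity keeps total probability 1
```

Since both the coin and the shift are unitary, the total probability stays exactly 1, which is a convenient sanity check for any quantum walk simulator.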