This is my second time submitting a VIS paper. This year I visited NYU and coauthored the paper with Prof. Bertini. It was really exciting to work with Enrico, and we are satisfied with what we have done so far. I hope there will be other opportunities to collaborate with him. Though it’s not my first time working toward the VIS deadline, it was still a fresh experience and I learned a lot from it.
Finally not just a student anymore! Now I can write “Ph.D. candidate” in my CV. That was the first thought that came to my mind after I passed the PQE. I need to throw myself into bed for two days: that was my second thought. Generally speaking, my PQE was far from a good one, and my time management was a disaster. Luckily, the oral presentation went as expected.
This year, VIS was held in Phoenix, Arizona. As the name implies, Phoenix is a hot and dry city located in the Sonoran Desert.
This is my second year attending VIS. Unlike last year, I was able to present a conference paper at VAST this year, which was a great experience for me. More importantly, it was really nice to have the opportunity to learn what others are doing in this community.
This is a paper list that I summarized for my PQE.
Interpretable Machine Learning for Complex Systems - A Workshop at NIPS 2016
I have been reading papers and articles and searching for ideas for my Ph.D. Qualification Exam (PQE) for a few days. Since I am interested in working in the interdisciplinary field of Visualization and Machine Learning, the idea of “explainable AI” (XAI) seems promising to me. After discussing with my professor, I decided to fix the survey topic as “Visualization for Explainable Machine Learning”. This blog summarizes my understanding of the motivation, scope, and applications of XAI.
This paper is written by Amershi, a researcher at MSR and a leading figure in the crossing area of ML + HCI.
I am working closely on RNNs these days, trying to open the ``black box’’ and see what an RNN has learned through its hidden states and gates.
After running experiments intensively, I suddenly realized that analyzing them mathematically first might give some clues for better visualization.
There are already many good articles on the internet introducing RNNs and their variants (LSTM, GRU). So this is just a post for myself to summarize things about RNNs.
So first, what is Recurrent Neural Network (RNN)?
In short, an RNN is a type of neural network that deals with sequence data. Classical neural networks, e.g., the Multi-Layer Perceptron (MLP) or Convolutional Neural Network (CNN), take a fixed-size input and produce a fixed-size output. Although for CNNs you can resize images of different sizes into a standard size so that the model can work with variable-size input, the CNN itself still only accepts fixed-size input.
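To make the contrast concrete, here is a minimal sketch of the RNN recurrence: a vanilla cell with a tanh activation, applied step by step over a sequence. Because the same cell (same weights) is reused at every time step, the loop can run over a sequence of any length. The weight names and dimensions below are hypothetical, chosen just for illustration.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One recurrence step: the new hidden state is computed from
    # the current input x_t and the previous hidden state h_prev.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Hypothetical sizes: input dimension 3, hidden dimension 4.
rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(3, 4))  # input-to-hidden weights
W_hh = 0.1 * rng.normal(size=(4, 4))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(4)

# The same cell is applied at every step, so the sequence length
# (here 7) can be anything -- this is what lets RNNs consume
# variable-length input, unlike an MLP or a plain CNN.
h = np.zeros(4)
for x_t in rng.normal(size=(7, 3)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```

After the loop, `h` is a fixed-size summary (here a length-4 vector) of the whole variable-length sequence, which is exactly the hidden state that interpretability work tries to peer into.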