For this robotics project, we want to build a robot that can sort a set of messy objects by their color. To achieve this, we spent a lot of time tuning the color detection parameters in the Robot Operating System (ROS). Beyond that, it is also very challenging for the robot to move in a straight line, because…
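A minimal sketch of the kind of color classification involved; the reference colors and the nearest-color rule here are illustrative assumptions, not the actual tuned ROS parameters.

```python
import numpy as np

# Hypothetical reference colors (RGB); the real thresholds were tuned in ROS.
REFERENCE_COLORS = {
    "red":   np.array([200, 30, 30]),
    "green": np.array([30, 180, 40]),
    "blue":  np.array([30, 40, 200]),
}

def classify_color(pixel_rgb):
    """Return the name of the reference color nearest to pixel_rgb."""
    pixel = np.asarray(pixel_rgb, dtype=float)
    return min(REFERENCE_COLORS,
               key=lambda name: np.linalg.norm(pixel - REFERENCE_COLORS[name]))

print(classify_color([210, 25, 35]))  # a reddish pixel maps to "red"
```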
Category: Fall (2016)
Probabilistic Reasoning and Learning Homework 8
In the last homework, I calculate the results for a real application with both value iteration and policy iteration. In that application, I need to derive an algorithm to escape a dungeon whose walls are marked by obstacles `#`. In addition, the convergence of iterative policy evaluation is analyzed. Code is also attached.
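A minimal sketch of value iteration on such a `#`-walled grid; the toy dungeon, the +1 exit reward, and the discount factor below are assumptions for illustration, not the actual homework setup.

```python
import numpy as np

# Toy dungeon: '#' = wall, 'G' = exit (reward +1 on entering), '.' = free cell.
GRID = ["#####",
        "#..G#",
        "#.#.#",
        "#...#",
        "#####"]
GAMMA = 0.9
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def value_iteration(grid, tol=1e-6):
    rows, cols = len(grid), len(grid[0])
    V = np.zeros((rows, cols))
    while True:
        delta = 0.0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] in "#G":
                    continue  # walls and the terminal exit keep value 0
                best = -np.inf
                for dr, dc in ACTIONS:
                    nr, nc = r + dr, c + dc
                    if grid[nr][nc] == "#":
                        nr, nc = r, c  # bumping into a wall: stay put
                    reward = 1.0 if grid[nr][nc] == "G" else 0.0
                    best = max(best, reward + GAMMA * V[nr][nc])
                delta = max(delta, abs(best - V[r][c]))
                V[r][c] = best
        if delta < tol:
            return V
```

Cells adjacent to the exit converge to value 1, and each extra step away multiplies the value by the discount factor.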
Probabilistic Reasoning and Learning Homework 7
The seventh homework is about the Viterbi algorithm, Hidden Markov Models, and mixture models. The update and inference procedures for these models are derived and written up in the report. Code is also attached.
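A compact sketch of the Viterbi decoder in the log domain; the parameter shapes are assumptions about how the HMM would be represented, not the homework's exact notation.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an HMM, computed in the log domain.

    obs: observation indices; pi: initial probs (n,);
    A: transition probs (n, n); B: emission probs (n, m).
    """
    n, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])      # delta at t = 0
    back = np.zeros((T, n), dtype=int)            # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)        # scores[i, j]: best via i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]                   # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```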
Probabilistic Reasoning and Learning Homework 6
In this homework, I work on the Expectation-Maximization (EM) algorithm and its auxiliary function. Handwritten answers and code are attached in the report.
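A minimal sketch of EM on a two-component 1-D Gaussian mixture, to show the E-step/M-step alternation; the model, initialization, and iteration count are illustrative assumptions, not the homework's actual problem.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = (w / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sigma
```

Each iteration cannot decrease the data log-likelihood, which is exactly what the auxiliary-function argument establishes.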
Probabilistic Reasoning and Learning Homework 5
This homework is mainly about gradient-based learning, using either first-order gradient descent or Newton's method. I am asked to derive convergence rates and error bounds. After that, two real applications, stock market prediction and handwritten digit classification, are given for us to practice on. The experimental results and code are written up in the report.
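A minimal sketch of the 1-D Newton update; the example objective is an assumption chosen to show why Newton's method converges so fast, namely that on a quadratic it reaches the minimum in a single step.

```python
def newton_minimize(grad, hess, x0, n_iter=20):
    """Newton's method for 1-D minimization: x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(n_iter):
        x = x - grad(x) / hess(x)
    return x

# Example: f(x) = (x - 3)^2 + 1, minimized at x = 3.
grad = lambda x: 2 * (x - 3)
hess = lambda x: 2.0
print(newton_minimize(grad, hess, x0=0.0))  # a quadratic converges in one step
```

First-order gradient descent on the same objective would need a step size and many iterations; the second-order correction is what buys the faster (locally quadratic) convergence rate.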
Probabilistic Reasoning and Learning Homework 4
In this homework, I need to work on maximum likelihood estimation in different settings. For the third part of the homework, I am asked to compute the maximum likelihood estimates of unigram and bigram probabilities for a real dataset. Code is attached in the report.
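A minimal sketch of the bigram MLE from counts; the toy sentence is an assumption, not the homework's dataset.

```python
from collections import Counter

def bigram_mle(tokens):
    """MLE of P(w2 | w1) = count(w1, w2) / count(w1).

    Note: the denominator uses the raw unigram count, so a word that only
    appears as the final token is counted even though it is never a context.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

probs = bigram_mle("the cat sat on the mat".split())
print(probs[("the", "cat")])  # count(the, cat) / count(the) = 1 / 2 = 0.5
```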
Probabilistic Reasoning and Learning Homework 3
For the third homework, I again use basic probabilistic tools to do inference in either a chain or a polytree. Code is included in the report.
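A minimal sketch of the simplest such inference, marginalizing along a chain by eliminating variables left to right; the two-state example is an assumption for illustration.

```python
import numpy as np

def chain_marginal(p0, transitions):
    """Marginal P(X_T) on a chain X_0 -> X_1 -> ... -> X_T.

    p0: distribution over X_0; each A in transitions has
    A[i, j] = P(X_{t+1} = j | X_t = i).
    """
    p = np.asarray(p0, dtype=float)
    for A in transitions:
        p = p @ A          # sum out X_t: p_j <- sum_i p_i * A[i, j]
    return p
```

On a polytree the same idea generalizes to passing messages toward the query node rather than a single left-to-right sweep.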
Probabilistic Reasoning and Learning Homework 2
In the second homework, I am first required to work on probabilistic inference, probabilistic reasoning, and the sigmoid function. Later, I need to simplify a given probabilistic formula using conditional independence between variables, including three independence rules and the Markov blanket rule.
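As an illustration of the kind of simplification involved (not the exact homework formula): on the chain $A \to B \to C$, the rule $C \perp A \mid B$ lets a conditional drop variables,

```latex
P(C \mid A, B) = P(C \mid B),
```

and the Markov blanket rule is the general form of this: conditioned on its blanket $\mathrm{MB}(X)$ (parents, children, and children's other parents), a node is independent of everything else,

```latex
P(X \mid \text{all other variables}) = P(X \mid \mathrm{MB}(X)).
```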
Probabilistic Reasoning and Learning Homework 1
This is the first homework in the course. It covers basic probability concepts such as conditional probability, Bayes' rule, entropy, Kullback-Leibler (KL) divergence, and mutual information. Code is attached in the report.
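A minimal sketch of the KL divergence for discrete distributions; the two example distributions are assumptions chosen to show its asymmetry, which is why it is a divergence rather than a true distance.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i), in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                       # terms with p_i = 0 contribute 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # differs from the reversed order
```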
Computer Vision I Homework 4
For the written part, one is asked to work on the nearest neighbor algorithm and Principal Component Analysis (PCA). Later on, naive recognition, k-Nearest Neighbor (k-NN) recognition, Eigenfaces recognition, and Fisherfaces recognition are implemented separately. It is interesting that for Eigenfaces recognition, dropping the top four eigenfaces actually yields better results. This might be caused by…
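A minimal sketch of the projection step with a `skip` option for dropping the leading components, the operation behind the eigenface observation above; the SVD-based formulation and the small shapes in the usage are assumptions for illustration.

```python
import numpy as np

def pca_features(X, n_components, skip=0):
    """Project rows of X onto principal components, optionally skipping the
    top `skip` components (as when dropping the leading eigenfaces, which
    tend to capture illumination rather than identity)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are principal directions, ordered by decreasing variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[skip:skip + n_components]
    return Xc @ components.T
```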