Student-driven machine learning | MIT News

From their early days at MIT, and even before, Emma Liu ’22, MNG ’22, Yo-whan “John” Kim ’22, MNG ’22, and Clemente Ocejo ’21, MNG ’22 knew they wanted to do computational research and explore artificial intelligence and machine learning. “Since high school, I’ve been into deep learning and was involved in projects,” says Kim, who participated in a summer program at MIT and Harvard University’s Research Science Institute (RSI) and went on to work on action recognition in videos using Microsoft’s Kinect.

As recent graduates of the Department of Electrical Engineering and Computer Science’s Master of Engineering (MEng) Thesis Program, Liu, Kim, and Ocejo have developed skills that are helping them lead application-driven projects. Working with the MIT-IBM Watson AI Lab, they have improved text classification with limited labeled data and designed machine learning models for better long-term forecasting of product purchases. For Kim, “it was a very smooth transition and … a great opportunity for me to continue working in the field of deep learning and computer vision at the MIT-IBM Watson AI Lab.”

Modeling video

In collaboration with researchers from academia and industry, Kim designed, trained, and tested a deep learning model for identifying actions across domains, in this case, video. His team focused on using synthetic data from generated videos to pre-train models, then ran prediction and inference tasks on real data composed of different action classes. They wanted to see how pre-training on synthetic videos, particularly simulations of human actions or scenes rendered by game engines, would transfer to real data: publicly available videos scraped from the internet.

The motivation for this research, Kim says, is that real videos can come with problems such as image bias, copyright restrictions, and ethical or personal sensitivities; for example, it would be difficult to capture videos of cars hitting people, or to use people’s faces, actual addresses, or license plates without consent. Kim experimented with 2D, 2.5D, and 3D video models, with the goal of creating domain-specific or even large general synthetic datasets that could be used for transfer to domains where data is scarce, for example, running action recognition on a construction site for construction-industry applications. “I didn’t expect synthetically created videos to perform on par with real videos,” he says. “I think that opens up a lot of different roles [for the work] in the future.”
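The pretrain-on-synthetic, fine-tune-on-real recipe Kim describes can be sketched with a toy classifier. Everything below is an illustrative stand-in, not the lab’s actual models: a simple logistic regression plays the role of the video network, Gaussian feature vectors play the role of synthetic and real videos, and the `shift` parameter mimics the synthetic-to-real domain gap.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift):
    """Two 2-D Gaussian classes; `shift` mimics a synthetic-to-real domain gap."""
    x0 = rng.normal([0 + shift, 0], 1.0, (n, 2))
    x1 = rng.normal([3 + shift, 3], 1.0, (n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

def train(X, y, w=None, steps=500, lr=0.1):
    """Logistic regression by gradient descent; pass `w` to fine-tune."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # add a bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))           # sigmoid predictions
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient step
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((Xb @ w > 0) == y)

# 1) Pre-train on plentiful "synthetic" examples.
X_syn, y_syn = make_data(500, shift=0.0)
w = train(X_syn, y_syn)

# 2) Fine-tune on a handful of "real" samples from a shifted domain.
X_real, y_real = make_data(10, shift=0.5)
w = train(X_real, y_real, w=w, steps=100)

# 3) Evaluate on held-out "real" data.
X_test, y_test = make_data(200, shift=0.5)
print(f"real-domain accuracy: {accuracy(w, X_test, y_test):.2f}")
```

The point of the sketch is only the shape of the workflow: the model never sees more than a few labeled examples from the target domain, yet inherits most of what it needs from the cheap, plentiful source domain.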

Despite a rocky start to the project, collecting and creating his data and running many models, Kim says he wouldn’t have done it any other way. “It was amazing how the lab members encouraged me: ‘It’s OK. You’ll have all the experiments and the fun part to come. Don’t stress too much.’” It was this structure, he says, that helped him take ownership of the work, and the team gave him amazing ideas along the way.

Data labeling

A lack of data was also at the heart of Emma Liu’s work. “The overarching problem is that there’s all this data out there in the world, and for a lot of machine learning problems, you need that data to be labeled,” says Liu, “but then you have all this unlabeled data that’s available that you’re not really leveraging.”

Liu, guided by her MIT and IBM group, worked to put that unlabeled data to use, generating pseudo-labels for it with semi-supervised text classification models (and by combining aspects of them), based on predictions and probabilities about which categories each piece of previously unlabeled data falls into. “Then the problem is that there’s been prior work that’s shown you can’t always trust the probabilities; specifically, neural networks have been shown to be overconfident a lot of the time,” Liu says.

Liu and her team addressed this by evaluating the accuracy and uncertainty of the models and recalibrating them to improve their self-training framework. The self-training and calibration steps allowed her to have better confidence in the predictions. The pseudo-labeled data, she says, could then be added to the pool of real data, expanding the dataset; the process could be repeated over a series of iterations.
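The loop Liu describes, predict, recalibrate, keep only confident pseudo-labels, retrain, can be sketched in a few lines. The pieces below are illustrative assumptions rather than the lab’s framework: a nearest-centroid model stands in for the text classifier, temperature scaling stands in for the recalibration step, and the 0.95 confidence threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, T=1.0):
    """Temperature T > 1 softens overconfident probabilities (crude recalibration)."""
    z = z / T
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit_centroids(X, y, k=2):
    """Toy 'classifier': class centroids; logits are negative squared distances."""
    return np.array([X[y == c].mean(axis=0) for c in range(k)])

def logits(centroids, X):
    return -((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)

# Tiny labeled set, large unlabeled pool drawn from the same two Gaussians.
X0 = rng.normal([0, 0], 1.0, (200, 2))
X1 = rng.normal([4, 4], 1.0, (200, 2))
X_lab = np.vstack([X0[:5], X1[:5]])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([X0[5:], X1[5:]])

for _ in range(3):                               # a few self-training iterations
    cent = fit_centroids(X_lab, y_lab)
    probs = softmax(logits(cent, X_unl), T=2.0)  # recalibrated confidences
    keep = probs.max(axis=1) > 0.95              # pseudo-label only confident points
    if not keep.any():
        break
    X_lab = np.vstack([X_lab, X_unl[keep]])
    y_lab = np.concatenate([y_lab, probs[keep].argmax(axis=1)])
    X_unl = X_unl[~keep]

print(f"labeled set grew to {len(X_lab)} examples")
```

Each pass grows the labeled set with the unlabeled points the calibrated model is most sure about, which is the sense in which the otherwise idle data gets leveraged.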

For Liu, her biggest takeaway wasn’t the product but the process. “I learned a lot about being an independent researcher,” she says. As an undergraduate, Liu worked with IBM to develop machine learning methods for repurposing drugs already on the market. Collaborating with researchers from academia and industry, Liu and the cohort of MEng students working with the MIT-IBM Watson AI Lab were trusted to ask questions, seek out experts, digest and present scientific papers, and test out their ideas; they were given the knowledge, freedom, and flexibility to decide the direction of their research. Taking on this key role, Liu says, “I feel like I have ownership over my project.”

Demand forecast

Clemente Ocejo also came away with a sense of mastery, having built a strong foundation in AI techniques and time-series methods during his time at MIT and the MIT-IBM Watson AI Lab, beginning with the MIT Undergraduate Research Opportunities Program (UROP), where he met his MEng advisor. “You really have to be proactive in decision-making,” says Ocejo, “vocalizing [your choices] as a researcher and letting people know that this is what you’re doing.”

Ocejo collaborated with the lab to apply deep learning, rather than traditional time-series methods, to better forecast product demand in the medical field. Here, he designed, wrote, and trained a transformer, a specific type of machine learning model that is typically used in natural language processing and can learn very long-term dependencies. Ocejo and his team compared target forecast demands across months, learning dynamic connections and attention weights between product sales within a product family. They looked at identifier features concerning price and amount, as well as account features about who is purchasing the items or services.

“One product doesn’t necessarily affect the prediction made for another product in the moment of prediction; it only affects the parameters learned during training that lead to that prediction,” says Ocejo. “So we added this layer that learns attention across all of the products in the dataset.”
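The cross-product attention layer Ocejo describes can be illustrated with plain scaled dot-product attention, the core operation of a transformer, applied over per-product feature vectors. The dimensions, random weights, and feature interpretation below are placeholders, not the lab’s architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention: each product's new representation is a
    weighted mix of every product's values, with weights from Q @ K.T."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)    # each row sums to 1
    return w @ V, w

n_products, d = 6, 4                        # 6 products, 4 features each
X = rng.normal(size=(n_products, d))        # e.g. price/quantity embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

out, attn = attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)                # (6, 4) (6, 6)
```

Row i of `attn` says how much product i attends to every other product when its forecast is formed, which is exactly the direct, at-prediction-time influence the quote contrasts with influence baked in only through training.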

In the long-term horizon, forecasting a year out, the MIT-IBM Watson AI Lab team was able to outperform the existing model; more impressively, it also did so in the short term (close to a fiscal quarter). Ocejo attributes this to the dynamics of his interdisciplinary team. “A lot of the people on my team didn’t necessarily have a lot of experience in the deep learning aspect, but they had a lot of experience in supply chain management, operations research, and optimization, which is something I don’t have that much experience in,” says Ocejo. “They were giving a lot of feedback and … knew what the industry wanted to see or wanted to improve, so that was very helpful in focusing my attention.”

In this work, it wasn’t an abundance of data that gave Ocejo and his team an edge, but rather its structure and presentation. Large deep learning models often require millions and millions of data points to draw meaningful conclusions; however, the MIT-IBM Watson AI Lab team demonstrated that results and technique improvements can be application-specific. “It shows that these models can learn something useful, in the right setting, with the right architecture, without needing an excess amount of data,” says Ocejo. “And then with an excess amount of data, it’ll only get better.”
