Jimmy Guerrero for Voxel51

Originally published at voxel51.com

Recapping the AI, Machine Learning and Computer Vision Meetup — November 14, 2024

We just wrapped up the November ‘24 AI, Machine Learning and Computer Vision Meetup, and if you missed it or want to revisit it, here’s a recap! In this blog post you’ll find the playback recordings, highlights from the presentations and Q&A, as well as the upcoming Meetup schedule so that you can join us at a future event.

Human-in-the-loop: Practical Lessons for Building Comprehensive AI Systems

AI systems often struggle with data limitations, data distribution shift over time, and a poor user experience. Human-in-the-loop design offers a solution by placing users at the center of AI systems and leveraging human feedback for continuous improvement.

In this talk, we’ll dive deeply into a recent project at Merantix Momentum: an interactive tool for automatic, large-scale rodent behavior analysis in videos. We’ll discuss the machine learning components, including pose estimation, behavior classification, and active learning, and talk about the technical challenges and the impact of the project.
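For a rough sense of what the behavior-classification step can look like, here is a minimal sketch (not the speaker’s actual pipeline) that assumes per-frame pose keypoints are flattened over a short temporal window and fed to an XGBoost classifier, as touched on in the Q&A below; the keypoint count, window size, and labels are illustrative placeholders.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hypothetical setup: 7 keypoints per frame (x, y), stacked over a short
# temporal window so the classifier sees some motion context.
N_FRAMES, N_KEYPOINTS, WINDOW = 5000, 7, 5
rng = np.random.default_rng(0)

# Placeholder keypoints and behavior labels (e.g. 0=rest, 1=groom, 2=walk);
# in practice these come from a pose-estimation model and human annotation.
keypoints = rng.normal(size=(N_FRAMES, N_KEYPOINTS * 2))
labels = rng.integers(0, 3, size=N_FRAMES - WINDOW + 1)

# Flatten each sliding window of keypoints into one feature vector per sample
features = np.stack(
    [keypoints[i : i + WINDOW].ravel() for i in range(N_FRAMES - WINDOW + 1)]
)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```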

Speaker: Adrian Loy has an MSc in IT Systems Engineering and spent the last 5 years at Merantix Momentum planning and executing computer vision projects for a variety of clients. He is currently leading the Machine Learning Engineering Team at Momentum.

Q&A

  • What are the common challenges in CV when running inference at the edge, and how do you overcome them?
  • How do you handle the temporal and spatial aspect of key points just with XGBoost? What are the reasons for not using some sequential models such as LSTM/GRU or even transformers?
  • Can you induce rules of behavior, perhaps with LLMs? I think it is mostly supervised.
  • How large was the dataset you trained on? Which pre-trained model(s) did you try and use for the key point detection and how long did it take to train the XGBoost?
  • How many key points did you use? Would 4 (tail, body center, head center, nose) be enough?
  • Do you think using CG simulated data can play an important part in this?
  • Where do you store inference results?
  • Would YOLO be useful also for classification?

Deploying ML models on Edge Devices using Qualcomm AI Hub

In this talk we address the common challenges faced by developers migrating AI workloads from the cloud to edge devices. Qualcomm aims to democratize AI at the edge by supporting familiar frameworks and data types, and this is where Qualcomm AI Hub comes in. Developers can follow along, gaining the knowledge and tools to efficiently deploy optimized models on real devices using Qualcomm AI Hub.

We’ll walk through how to get started with Qualcomm AI Hub, go through examples of how to optimize models and bundle the downloadable target asset into your application, and talk through iterating on your model to meet performance requirements for on-device deployment!
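As a rough sketch of that compile–profile–download flow, here is what a first pass with the qai_hub Python client might look like; the model, device name, and input shape are illustrative assumptions, so check them against the current AI Hub documentation.

```python
import qai_hub as hub
import torch
import torchvision

# Illustrative source model: a torchvision MobileNetV2, traced for export
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# Target device name is an assumption; list available ones with hub.get_devices()
device = hub.Device("Samsung Galaxy S24 (Family)")

# Compile the model for the target device in the cloud
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),
)
target_model = compile_job.get_target_model()

# Profile the compiled model on a real hosted device to check latency and memory
profile_job = hub.submit_profile_job(model=target_model, device=device)

# Download the optimized asset to bundle into your application
target_model.download("mobilenet_v2_optimized")
```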

Speaker: Bhushan Sonawane has optimized and deployed thousands of AI models on-device across the iOS and Android ecosystems. Currently, he is building AI Hub at Qualcomm to make the on-device journey on Android and Snapdragon platforms as seamless as possible.

Q&A

  • What platform could be comparable for deploying models to the iPhone?
  • What could cause a model layer to be incompatible with the NPU? What’s the difference?
  • Does Qualcomm AI Hub store the models? Or is it safe to use proprietary custom models?
  • Does the compilation take place in an actual device or is there a cross-compiling method taking place?
  • Do you have any recommendations for the use of models in extended reality (e.g. for computer vision tasks in AR or VR)? I recently had an issue importing a model into Unity as it didn’t recognize TFLite- and ONNX-format models.
  • From the EfficientNet paper, they recommend using grouped convolutions (from ResNeXt) instead of depthwise convolutions. Is this sort of analysis done by Qualcomm?

Curating Excellence: Strategies for Optimizing Visual AI Datasets

In this talk Harpreet will discuss common challenges plaguing visual AI datasets, their impact on model performance, and share some tips and tricks for curating datasets to make the most of any compute budget or network architecture.

Speaker: Harpreet Sahota is a hacker-in-residence and machine learning engineer with a passion for deep learning and generative AI. He’s got a deep interest in RAG, Agents, and Multimodal AI.

Join the AI, Machine Learning and Computer Vision Meetup!

The goal of the Meetups is to bring together communities of data scientists, machine learning engineers, and open source enthusiasts who want to share and expand their knowledge of AI and complementary technologies.

Join one of the 12 Meetup locations closest to your timezone.

What’s Next?


Missed ECCV 2024 in Milan? We’ve lined up some top-notch speakers and presentations from the conference as part of our ECCV Redux series. Check them out below:

**Day 1: November 19**

**Day 3: November 21**

**Day 4: November 22**

Dive into the groundbreaking research, all from the comfort of your own space.

Register for the Zoom here. You can find a complete schedule of upcoming Meetups on the Voxel51 Events page.

Get Involved!

There are a lot of ways to get involved in the Computer Vision Meetups. Reach out if you identify with any of these:

  • You’d like to speak at an upcoming Meetup
  • You have a physical meeting space in one of the Meetup locations and would like to make it available for a Meetup
  • You’d like to co-organize a Meetup
  • You’d like to co-sponsor a Meetup

Reach out to me, Jimmy Guerrero (Meetup co-organizer), on Meetup.com or ping me on LinkedIn to discuss how to get you plugged in.

These Meetups are sponsored by Voxel51, the company behind the open source FiftyOne computer vision toolset. FiftyOne enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster. It’s easy to get started in just a few minutes.
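As a quick taste of that, here is a minimal first session with FiftyOne; the zoo dataset and uniqueness scoring below are just one illustrative entry point into dataset curation.

```python
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.zoo as foz

# Load a small sample dataset from the FiftyOne dataset zoo
dataset = foz.load_zoo_dataset("quickstart")

# Score each sample's visual uniqueness to help surface redundant images
fob.compute_uniqueness(dataset)

# Open the FiftyOne App to browse the dataset, least-unique samples first
session = fo.launch_app(dataset.sort_by("uniqueness"))
session.wait()
```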
