My experience as a first-time Delegate at AI Field Day 3
A few weeks ago, Keiran Shelden @Keiran_shelden of the Tech Breakfast Podcast introduced me to Stephen Foskett from Gestalt IT. Stephen has been running Tech Field Day events for many years, yet I was a bit surprised that the name of this particular upcoming event was AI Field Day 3. What do I know about ML (Machine Learning) and AI (Artificial Intelligence)? Well, I do know a little, at least in the context of how autonomous driving works. It's an industry I've been following for many years, a possibility that I've been pondering since my days in (unfinished) grad school. Back in 1993, I had the opportunity to use IR-based eye tracking equipment to see what part of a computer monitor a subject's eyes were looking at, and I clearly remember imagining how cool it would be to see where a driver's eyes look while driving. Why can't cars watch the roadways with us, helping us navigate more safely and autonomously too?
The idea for this AI Field Day was for me to participate as a delegate (panelist), enjoying discussions with a variety of companies who sponsor the event, after they present their AI-related technologies to us. Participation sounded fun to me, so I requested those 2 days off, then let Stephen know I was in, all-in.
Soon after day 1 of the live-streamed event, I got to experience the off-camera part. Right after each vendor pitch, the live stream is turned off. Delegates then take turns giving direct feedback to the presenter(s), critiquing their work and their content, helping them further master their craft. At typical IT events, such blunt but constructive feedback is rather rare. This is the truly unique part of the experience that I could easily imagine gaining tremendous value from, as an occasional, rather rough-around-the-edges public speaker myself.
The even more appealing aspect of this AI Field Day for me personally was the chance to enjoy open discussions with other delegates about AI-related topics. Gladly, videos of these discussions were published today too. While everybody said things far smarter than what I came up with, at least I tried.
The diverse delegates were largely from an IT and/or IT Storage background, with organizers flying eight delegates in to a temporary studio in a Santa Clara, CA hotel room, and five delegates participating virtually over Zoom from around the globe. Their human-aimed camera setup worked well, ensuring each of us could easily see whoever was speaking. You can read all about the sponsors and the delegates here.
It was an extra bonus for me that I already knew Karen Lopez @datachick from John Mark Troyer's really fun Tech Reckoning conference back in 2015, and I also sort of know Joep Piscaer from various articles I wrote featuring his work, including his creative use of GPUs and USB devices connected to VMware ESXi VMs. It was also really fun to hear Shimon Ben-David present for WEKA, since he and I both worked on IBM XIV Storage while at IBM. Small world!
I'm so glad I got to enjoy learning lots of new-to-me concepts and technologies from NetApp, Intel, WEKA, and DDN during these two packed-agenda days. I also had some Twitter fun with the images from the Intel presentation, especially since they dove a bit further into the hardware specifics, which is kind of my thing.
Any fears or concerns I had going into this new experience were unfounded, and I'd highly recommend this experience to any IT Professional who is up for something like this. The entire Gestalt IT crew was an absolute pleasure and very professional, with my special thanks owed to Emily Scafidi, Rachel Fritz, Matt Garvin, Megan Gordan, Andrew Blackburn, and of course to both Stephen Foskett and Keiran Shelden for making this happen. Also, thank you for the swag that arrived at my doorstep yesterday, a rather nice touch.
Despite my attempt to frame the autonomous driving discussion in terms of increasing safety, before long, other delegates were quick to steer the conversation clear of driving autonomy. I get it; perhaps the subject was just too polarizing for their taste, or too difficult to discuss without devolving into something less-than-constructive.
That said, I would still like to point out that the first thing that springs to most folks' minds when the topic of AI comes up is self-driving cars, even though driverless cars don't yet exist, at least not in widespread deployments. Gladly, I have been enjoying an earned front-row seat to semi-autonomous driving beta software in my daily driver EV, my 2018 Model 3. I've found it quite wonderful to witness its growing maturity since I first tested the #FSDBeta in October of 2021.
The idea behind this incorrectly named "Full Self Driving" optional feature is for the massive real-world data set to be used for machine learning, allowing Tesla to develop improved firmware that is then pushed out to the fleet's water-cooled, dual-processor, ~72 watt "Full Self Driving" computers using a proven Over-The-Air update process via WiFi or cellular. This dashboard computer (called FSD Hardware 3 or HW3) processes the 8 camera feeds locally and acts accordingly based on the software code it's running. When manually engaged, it can control acceleration, braking, and steering, but the driver can easily take over completely at any moment. The driver must also stay attentive or the system disengages, and all participants in the beta are made very aware that it's the human who is responsible for the driving, not the car. Currently, the objects in the data are labeled by humans, many of them in Buffalo, NY, as explained at Electrek:
The automaker listed some of the responsibilities of a data labeler:
- You will use the Autopilot labeling interface to label images critical to training our deep neural networks.
- You will interact with the computer vision engineers on the Autopilot team to help us improve on the design of an efficient labeling interface.
- You will be expected to gain basic computer vision and machine learning knowledge to better understand how the labels are used by our learning algorithms, as this will allow you to make more judgement calls on difficult edge cases that might come up during labeling.
Eventually, the plan is for the data to be auto-labeled by the Dojo supercomputer.
One more little aside: I worked for IBM for 21 years on x86 Server stuff. I just found out that the term "machine learning" was actually coined by IBM employee Arthur Samuel back in 1959!
I hope you'll enjoy viewing some of the videos from #AIFD3 that I've collected for you below, and as always, feel free to leave a comment for the Tech Field Day crew below the videos, and/or comments for me below this article. There are excellent long-form articles about the event's AI discussions below that you'll want to check out too.
Here's the complete playlist for AI Field Day 3:
Automatically plays all videos from the event
Is it Possible to Create a Truly Unbiased AI? An AI Field Day Roundtable:
In this special Field Day Roundtable discussion, Stephen Foskett asks the AI Field Day 3 delegates whether it is possible to create a truly unbiased AI. Given that machine learning is taking over so many decisions in our lives, how should we train ML to avoid bias? Or is bias inevitable, given that it uses human experience as an input? And what is bias, really?
Recorded in Santa Clara, CA on May 19, 2022 as part of AI Field Day 3. Visit https://TechFieldDay.com/event/aifd3/ to learn more.
Imagining New Applications for AI: An AI Field Day Roundtable
In this special Field Day Roundtable discussion, Stephen Foskett asks the AI Field Day 3 delegates to imagine the future of AI. What exciting applications are just around the corner that will leverage machine learning and other artificial intelligence technologies to better our lives?
Recorded in Santa Clara, CA on May 19, 2022 as part of AI Field Day 3. Visit https://TechFieldDay.com/event/aifd3/ to learn more.
Bio - Paul Braren
Paul is a 30th year IT Pro by day, an 11th year technical content creator at TinkerTry.com and podcaster by night, and a 4th year electric vehicle and ADAS (Advanced Driver Assistance Systems) enthusiast who enjoys sharing his Model 3 learnings at many EVents on weekends.
Paul is an IT Professional who also creates technical content at TinkerTry.com and is active in the VMware vExpert and VMUG Community. He’s acting as the Senior EV Correspondent for the Tech Breakfast Podcast, and he’s helping with EV Club of Connecticut advocacy. He’s also been a cautious Tesla “Full Self Driving” beta tester since October of 2021, but he’s been pondering various AI, ML, and ADAS (Advanced Driver Assistance Systems) approaches to this public safety challenge for over 30 years.
Meet Field Day Delegate – Paul Braren
How did you get into Technology and IT?
During graduate school, I found I was spending more time optimizing the performance of the Silicon Graphics computer used for eye-tracking research than I was studying. After a successful stint in helpdesk support that I quite enjoyed, we also had a baby on the way. So I confess, my actual “career” path in IT really began as I traveled extensively to teach resellers about the joys of OS/2. This led to growth into a variety of infrastructure, team lead, and customer-facing roles, and I’m so very thankful for all of it!
What do you do now? Tell us a little about your current role.
I’m an Advisory Solutions Architect at Dell Technologies, working with a variety of Enterprise customers across New England, helping them with their datacenter and cloud solutions.
Disclosure: I hold no stock and never held stock in any of the companies mentioned in this article.
See also at TinkerTry
- Featured on Tech Breakfast Podcast 250 | The Osborne Effect - Twitter Chaos - Death of ICE - XR - Battery Tech | Aaron, Russ, Tyler, Paul
Apr 19 2022
- Tesla Model 3 Boston and New York City Road Trip Tips - safely handle rain, snow, and heavy traffic with ease and efficiency
Mar 24 2022
- Little House Brewing Company in Chester Connecticut, built in 1836, now has a Tesla Solar Roof!
Feb 02 2022
- Tesla waypoints make it easy to decide how much longer to Supercharge on multi-stop road trips
Jan 13 2022
- Michelin CrossClimate 2 All Weather Tires Review - a safe year-round choice in rain/snow, hot/cold
Dec 10 2021
- Enduring the Tesla Safety Score experience on the road to a safer, more autonomous future
Oct 12 2021
- Model 3 HW3 brain replacement can cause temporary amnesia but Tesla Service can quickly restore your settings, all you need to know before you go
Feb 27 2020
Excellent related videos by Matt Ferrell that talk about how AI can help with climate change solutions, including wind turbine placement, along with cheaper and faster climate change modeling using IBM Quantum Computing.
- Feeding AI hungry applications efficiently at scale
May 23 2022 by Liselotte Foverskov at Textrovert
With the DDN A3I AI400X2, AI applications can consume 90 GB/s and 3 million IOPS out of the box. One of the presenters, William Beaudin, Senior Director of Solutions Engineering at DDN, demonstrated real-life examples of their customers' data sets. DDN powers some of the most data-intensive workloads and largest ML/DL data sets in the world. The data sets require intensive performance, as some of the training applications consume 1 TB/s from storage in real time. All of the example data sets DDN showed required speed at both reads and writes. This is an important point, and unfortunately some organizations miss this when they implement AI… and that's why they don't succeed with their project.
- AI bias isn’t a technical discussion
May 24 2022 by Gina Rosenthal at 24x7 IT Connection
Everyone’s a little bit biased
Maybe you’re thinking – I’m a good person. I’m not biased. But here’s the deal: we all have bias. It is human nature. In my majors in the liberal arts programs we were taught to actively expose our own bias so we could also work to neutralize it.
If having bias is part of being human, we must acknowledge it as we research. When it comes to AI, how can you trust any decision if bias has influenced your data or even your theory?
AI bias has the potential to impact many people, so it’s critical to try and identify and neutralize bias at each stage of the process.
- How We Avoid Collisions With Stationary and Moving Obstacles
Sep 1995 by James E. Cutting, Peter M. Vishton, and Paul A. Braren in Psychological Review
Here's the very first time I was published, back in 1992, during my time at Cornell as a graduate student and teaching assistant.
- Wayfinding on Foot From Information in Retinal, Not Optical, Flow
Apr 1992 by James E. Cutting, Ken Springer, Paul A. Braren, and Scott H. Johnson in Journal of Experimental Psychology
You can view all the fun good stuff I posted about #AIFD3 on Twitter.
Live now https://t.co/AqkSRjifNw — Paul Braren | TinkerTry.com 🖥️🔌☀️🔋🚗 (@paulbraren) May 18, 2022
Sponsors: @IntelAI @ddn_limitless @WekaIO @NetApp
Delegates: @andybanta @ArjanTim @TheKanter @FredericVHaren @JPiscaer @JPWarren @DataChick @LFoverskov @PaulBraren @MackenzieWifi @RayLucchesi @SFoskett
See also @CerebrasSystems#AIFD3 @GestaltIT pic.twitter.com/zgZ4lVzBvt
Here are pictures of the Gaudi2 which is a big jump. pic.twitter.com/bmYmf5U6MI— Patrick J Kennedy (@Patrick1Kennedy) May 19, 2022
Oh yes, thank you Patrick, kinda figured you might have such pics handy from @ServeTheHome! https://t.co/IDYWqw7NgG @SFoskett, thank you for inviting me to my first @TechFieldDay, I really enjoyed it, even as a virtual delegate. Sorry I couldn't be on site with you all! #AIFD3 pic.twitter.com/PcD5M9rBTT — Paul Braren | TinkerTry.com 🖥️🔌☀️🔋🚗 (@paulbraren) May 19, 2022
Joep, sorry we didn't get to chat more, but it sure was good to (virtually) meet you at #AIFD3 last week!— Paul Braren | TinkerTry.com 🖥️🔌☀️🔋🚗 (@paulbraren) May 26, 2022
I fondly remember our days of tinkering with GPUs and USB under VMware ESXi, very few are so brave!
Long before the Dell spinoff & now @Broadcom era. https://t.co/V93mFV2Tyn pic.twitter.com/2of22IcUc3
Fun swag box just arrived, thank you @WekaIO & @SFoskett @GestaltIT! pic.twitter.com/jJRcseIjEC— Paul Braren | TinkerTry.com 🖥️🔌☀️🔋🚗 (@paulbraren) May 25, 2022