BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Namur Institute For Complex Systems - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://www.naxys.be
X-WR-CALDESC:Events for Namur Institute for Complex Systems
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Brussels
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T030000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Brussels:20250227T130000
DTEND;TZID=Europe/Brussels:20250227T140000
DTSTAMP:20260502T175757Z
CREATED:20250102T113709Z
LAST-MODIFIED:20250102T113746Z
UID:2272-1740661200-1740664800@www.naxys.be
SUMMARY:Benoît Legat (UCLouvain)
DESCRIPTION:Title: Hidden convexity in linear neural networks \nAbstract: \nTraining neural networks involves minimizing a loss function that is nonconvex in the network’s weights. Despite this nonconvexity\, when the optimization converges to a local minimum\, that minimum is often close to globally optimal. In optimization\, such a transfer from local to global properties is usually achieved through convexity\, which neural networks seem to lack. Or is it hidden? There are two sources of nonconvexity in neural networks: 1) the nonlinear activation functions and 2) the multilinear product of the weight matrices. \nInterestingly\, recent research has shown that the second source does not\, on its own\, create local minima that are not global when paired with a mean squared error loss. Although this result is promising\, the complexity of the proof limits its generalization to richer models\, such as those with nonlinear activation functions or other loss structures. In this talk\, we reveal the convexity hidden in the problem and show how it allows a simpler and more insightful proof. By exposing this underlying structure\, we aim to make it easier to recognize which types of models are likely to train well\, and to extend this understanding to other machine learning architectures. \nThe seminar will take place in Room S08 at the Faculty of Sciences.
URL:https://www.naxys.be/event/benoit-legat-uclouvain/
CATEGORIES:NAXYS Seminar
END:VEVENT
END:VCALENDAR