Learning with Instance-dependent Noisy Labels by Anchor Hallucination and Hard Sample Label Correction

International Conference on Image Processing (ICIP 2024)

Authors
Po-Hsuan Huang*, Chia-Ching Lin*, Chih-Fan Hsu, Ming-Ching Chang, Wei-Chao Chen

Published
2024/3/18

Abstract
Learning from noisy-labeled data is crucial for real-world applications. Traditional Noisy-Label Learning (NLL) methods categorize training data into clean and noisy sets based on the loss distribution of training samples. However, they often neglect that clean samples, especially those with intricate visual patterns, may also yield substantial losses. This oversight is particularly significant in datasets with Instance-Dependent Noise (IDN), where mislabeling probabilities correlate with visual appearance.

Our approach explicitly distinguishes clean vs. noisy and easy vs. hard samples. We identify training samples with small losses, assuming they have simple patterns and correct labels. Using these easy samples, we hallucinate multiple anchors to select hard samples for label correction. The corrected hard samples, together with the easy samples, then serve as labeled data in subsequent semi-supervised training. Experiments on synthetic and real-world IDN datasets demonstrate the superior performance of our method over other state-of-the-art NLL methods.

Keywords
Noisy Label Learning; Semi-supervised Learning

Download
PDF: https://arxiv.org/abs/2407.07331
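The pipeline the abstract describes — select small-loss samples as easy, hallucinate per-class anchors from their features, and relabel hard samples by their nearest anchor — can be sketched as follows. This is an illustrative approximation under stated assumptions, not the authors' implementation: the quantile threshold stands in for the paper's loss-distribution modeling, the averaged-subset anchors stand in for its anchor hallucination, and all function names are hypothetical.

```python
import numpy as np

def split_easy(losses, quantile=0.9):
    """Mark samples whose per-sample loss falls below a quantile as 'easy'.
    A simple quantile cut stands in for modeling the loss distribution."""
    return losses <= np.quantile(losses, quantile)

def hallucinate_anchors(features, labels, easy_mask, n_classes, n_anchors=3, seed=0):
    """For each class, build several anchors by averaging random subsets of
    easy-sample features (a hypothetical stand-in for anchor hallucination)."""
    rng = np.random.default_rng(seed)
    anchors, anchor_labels = [], []
    for c in range(n_classes):
        feats = features[easy_mask & (labels == c)]
        if len(feats) == 0:
            continue
        for _ in range(n_anchors):
            idx = rng.choice(len(feats), size=max(1, len(feats) // 2), replace=False)
            anchors.append(feats[idx].mean(axis=0))
            anchor_labels.append(c)
    return np.stack(anchors), np.array(anchor_labels)

def correct_hard_labels(features, easy_mask, anchors, anchor_labels):
    """Relabel each hard (non-easy) sample with the class of its nearest
    anchor in feature space; returns hard indices and corrected labels."""
    hard_idx = np.flatnonzero(~easy_mask)
    dists = np.linalg.norm(features[hard_idx, None, :] - anchors[None, :, :], axis=2)
    return hard_idx, anchor_labels[dists.argmin(axis=1)]
```

The easy samples plus the corrected hard samples would then form the labeled set for the semi-supervised training stage; the remaining samples would be treated as unlabeled.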