Redirected transfer learning for robust multi-layer subspace learning
2024-03-01
Journal: PATTERN ANALYSIS AND APPLICATIONS (IF: 3.7 [JCR-2023], 2.7 [5-Year])
ISSN: 1433-7541
EISSN: 1433-755X
Volume: 27, Issue: 1
Publication status: Published
DOI: 10.1007/s10044-024-01233-8
Abstract

Unsupervised transfer learning methods usually exploit labeled source data to learn a classifier for unlabeled target data with a different but related distribution. However, most existing transfer learning methods use a 0-1 matrix as labels, which greatly limits the flexibility of transfer learning. Another major limitation is that these methods are affected by the redundant features and noise residing in cross-domain data. To cope with these two issues simultaneously, this paper proposes a redirected transfer learning (RTL) approach for unsupervised transfer learning with a multi-layer subspace learning structure. Specifically, in the first layer, we learn a robust subspace in which data from different domains can be well interlaced. This is achieved by reconstructing each target sample with the lowest-rank representation of the source samples. In addition, imposing $L_{2,1}$-norm sparsity on the regression term and the regularization term brings robustness against noise and selects informative features, respectively. In the second layer, we further introduce a redirected label strategy in which the strict binary labels are relaxed into continuous values for each datum. To effectively handle the unknown labels of the target domain, we iteratively construct pseudo-labels for unlabeled target samples to improve the discriminative ability in classification. The superiority of our method in classification tasks is confirmed on several cross-domain datasets.
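As a rough illustration only (the notation below is not taken from the paper: $X_s$ and $X_t$ denote source and target data matrices, $P$ a shared projection, $Z$ the representation coefficients, $E$ a residual, and $\lambda$, $\gamma$ trade-off weights), the first-layer objective described in the abstract might take a generic low-rank transfer-subspace form such as

$$\min_{P,\,Z,\,E}\ \operatorname{rank}(Z) \;+\; \lambda \lVert E \rVert_{2,1} \;+\; \gamma \lVert P \rVert_{2,1} \qquad \text{s.t.}\quad P^{\top} X_t = P^{\top} X_s Z + E,$$

where the low-rank term interlaces the two domains, the $L_{2,1}$-norm on $E$ gives robustness to sample-wise noise, and the $L_{2,1}$-norm on $P$ selects informative features; the paper's actual formulation may differ.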

Keywords: Transfer learning; Robustness; Sparsity; Regression; Discriminative
URL: View original
Indexed by: SCI
Language: English
WOS Research Area: Computer Science
WOS Category: Computer Science, Artificial Intelligence
WOS Accession Number: WOS:001173275700021
Publisher: SPRINGER
Document type: Journal article
Item identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/372817
Collection: School of Information Science and Technology_PI Research Groups_Lu Sun Group
Corresponding author: Bao, Jiaqi
Author affiliations:
1.Hokkaido Univ, Sapporo, Hokkaido, Japan
2.Shanghai Tech Univ, Shanghai, Peoples R China
Recommended citation:
GB/T 7714
Bao, Jiaqi, Kudo, Mineichi, Kimura, Keigo, et al. Redirected transfer learning for robust multi-layer subspace learning[J]. PATTERN ANALYSIS AND APPLICATIONS, 2024, 27(1).
APA: Bao, Jiaqi, Kudo, Mineichi, Kimura, Keigo, & Sun, Lu. (2024). Redirected transfer learning for robust multi-layer subspace learning. PATTERN ANALYSIS AND APPLICATIONS, 27(1).
MLA: Bao, Jiaqi, et al. "Redirected transfer learning for robust multi-layer subspace learning". PATTERN ANALYSIS AND APPLICATIONS 27.1 (2024).
Files in this item:
File name/size | Document type | Version type | Access type | License
 
