<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts | Zedong Wang</title><link>https://zedongwang.netlify.app/post/</link><atom:link href="https://zedongwang.netlify.app/post/index.xml" rel="self" type="application/rss+xml"/><description>Posts</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><image><url>https://zedongwang.netlify.app/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>Posts</title><link>https://zedongwang.netlify.app/post/</link></image><item><title>Research Proposal towards Efficient Visual Representation Learning</title><link>https://zedongwang.netlify.app/post/snacks-details-of-backpropagation/</link><pubDate>Thu, 14 Apr 2022 10:21:33 +0000</pubDate><guid>https://zedongwang.netlify.app/post/snacks-details-of-backpropagation/</guid><description>&lt;h2 id="1-introduction">&lt;strong>1. Introduction&lt;/strong>&lt;/h2>
&lt;p>Deep Neural Networks (DNNs) have become the de facto infrastructure for visual representation learning, which plays a pivotal role in computer vision. Inspired by mammalian visual systems, Convolutional Neural Networks (CNNs) were first designed to capture the neighborhood correlations hidden in observed images through convolutional inductive biases and nonlinear activations. By stacking hierarchical convolutional layers, CNNs attain a theoretically increasing receptive field for recognizing underlying image patterns. Apart from CNNs, Vision Transformers (ViTs) have recently emerged and achieved promising results on ImageNet. Specifically, ViT splits input images into non-overlapping fixed-size patches as visual tokens and captures long-range feature interactions among these tokens via the self-attention mechanism. By introducing regional inductive biases, ViT and its variants have been extended to multifarious vision benchmarks.&lt;/p>
&lt;p>Despite the remarkable advances of deep architectures over the past decade, existing visual recognition pipelines still have limitations: the fundamental challenge of &lt;strong>high image-inherent semantic sparsity&lt;/strong> has not been well addressed. In particular, digital images are optical signals captured by sensors, with numerous pixels as basic elements. These signals reflect real-world scenarios objectively but accordingly have low information and semantic density. In contrast, natural language, as a type of human-created data, enjoys high intrinsic semantic density and expression efficiency, which has largely facilitated the development of NLP.&lt;/p>
&lt;p>Therefore, the key to this problem is to improve the utilization of sparse data semantics. Accordingly, possible breakthroughs regarding visual representation sparsity can be broadly divided into two aspects: &lt;strong>Efficient Deep Architecture Design (model-end)&lt;/strong> and &lt;strong>Visual Pre-Training&lt;/strong> &lt;strong>(data-end)&lt;/strong>.&lt;/p>
&lt;h2 id="2-efficient-deep-architecture-design">&lt;strong>2. Efficient Deep Architecture Design&lt;/strong>&lt;/h2>
&lt;h3 id="21-preliminary">2.1 Preliminary&lt;/h3>
&lt;p>As mentioned above, learning robust contextual image patterns effectively is one of the main themes of high-quality visual representation learning. In this section, I first categorize two types of significant operations that are closely tied to representational capacity: &lt;strong>regionality perception&lt;/strong> and &lt;strong>context aggregation&lt;/strong>. Here, we assume the input feature $X$ and the output $Z$ have the same shape $\mathbb{R}^{C\times H\times W}$.&lt;/p>
&lt;h4 id="211-regionality-perception">2.1.1 Regionality Perception&lt;/h4>
&lt;p>Since raw images are redundant signals, operations armed with local and structural inductive biases are fundamental components of DNNs, ensuring efficiency and stability during training. I summarize the operations and network modules that &lt;em>statically&lt;/em> extract contextual features as &lt;em>regionality perception&lt;/em> and define it as $Z = \mathcal{S}(X, W)$, where $\mathcal{S}(\cdot,\cdot)$ can be an arbitrary binary operator (e.g., dot-product, convolution, element-wise product) and $W$ denotes the learnable weight.
Instances of &lt;em>regionality perception&lt;/em> are locally connected and weight-sharing across positions, such as all kinds of convolutions and even non-parametric operations like pooling. For instance, the convolution operation, which is the most commonly used and thoroughly studied, can be written as $Z = \mathcal{S}(X, K)$, where $\mathcal{S}(\cdot,\cdot)$ is the convolution and the kernel $K\in \mathbb{R}^{M\times C\times k\times k}$ consists of $M$ filters. To further boost representational ability, considerable effort has been devoted to making convolution-based regionality perception lighter and more flexible. Some works factorize vanilla convolution into depthwise and pointwise counterparts to balance the efficiency vs. accuracy trade-off.&lt;/p>
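&lt;p>The saving from the depthwise-pointwise factorization mentioned above can be seen from a simple parameter count. The sketch below (with illustrative channel and kernel sizes, not taken from any particular architecture) compares a standard $M\times C\times k\times k$ kernel against its separable counterpart:&lt;/p>

```python
# Parameter count of a standard convolution vs. its depthwise separable
# factorization. C, M, k are illustrative assumptions.
def conv_params(C, M, k):
    return M * C * k * k          # M filters, each of shape C x k x k

def separable_params(C, M, k):
    depthwise = C * k * k         # one k x k filter per input channel
    pointwise = M * C             # 1x1 convolution mixing channels
    return depthwise + pointwise

C, M, k = 64, 128, 3
print(conv_params(C, M, k))       # 73728
print(separable_params(C, M, k))  # 576 + 8192 = 8768, roughly 8x fewer
```

The ratio approaches $1/k^2 + 1/M$, which is why separable designs dominate efficiency-oriented CNNs.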
&lt;h4 id="212-context-aggregation">2.1.2 Context Aggregation&lt;/h4>
&lt;p>Apart from &lt;em>static&lt;/em> neighborhood correlations, high-level semantic context modelling is also vital for sound representation learning. Canonical CNNs tend to employ deep stacks of regionality perception modules to passively attain increasing theoretical receptive fields. However, such designs can be computationally inefficient and poorly suited to discriminative, self-relevant context recognition. To tackle this dilemma, &lt;em>context aggregation&lt;/em> modules were proposed to &lt;em>adaptively&lt;/em> emphasize the underlying contextual information and discard trivial redundancies in the input feature. Formally, we summarize context aggregation as a family of network components that adaptively capture long-range interactions between two embedded features:&lt;/p>
&lt;p>\begin{equation}
O = \mathcal{S}\big(\mathcal{F}_{\phi}(X), \mathcal{G}_{\psi}(X)\big)
\end{equation}&lt;/p>
&lt;p>where $\mathcal{F}_{\phi}(X)$ and $\mathcal{G}_{\psi}(X)$ are the aggregation and context branches with different parameters. Optionally, the output can be transformed back to the input dimension by a linear projection, $Z = OW_{\phi}$. In contrast to regionality perception, context aggregation modules model the importance of each position of $X$ with the aggregation branch $\mathcal{F}_{\phi}(X)$ and reweight the embedded feature from the context branch $\mathcal{G}_{\psi}(X)$ via $\mathcal{S}(\cdot,\cdot)$. Consequently, context aggregation can be viewed as a prototype operation that yields different modules by designating diverse instantiations of $\mathcal{S}(\cdot,\cdot)$, $\mathcal{F}(\cdot)$, and $\mathcal{G}(\cdot)$.&lt;/p>
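&lt;p>One concrete instantiation of this prototype, shown as a minimal sketch below, takes $\mathcal{S}(\cdot,\cdot)$ as an element-wise product, $\mathcal{F}_{\phi}$ as a sigmoid gate, and $\mathcal{G}_{\psi}$ as a linear embedding. The shapes and random weights are illustrative assumptions, not a specific published module:&lt;/p>

```python
import numpy as np

# Context aggregation prototype Z = S(F_phi(X), G_psi(X)):
# F_phi produces per-position importance (a sigmoid gate),
# G_psi embeds the feature, and S reweights element-wise.
rng = np.random.default_rng(0)
C, N = 8, 16                                # channels, flattened positions
X = rng.standard_normal((C, N))

W_phi = rng.standard_normal((C, C)) * 0.1   # aggregation-branch weights
W_psi = rng.standard_normal((C, C)) * 0.1   # context-branch weights

gate = 1.0 / (1.0 + np.exp(-(W_phi @ X)))   # F_phi(X): importance in (0, 1)
context = W_psi @ X                         # G_psi(X): embedded feature
Z = gate * context                          # S(.,.): element-wise reweighting
assert Z.shape == X.shape
```

Swapping the element-wise product for a dot-product over positions recovers attention-style aggregation under the same template.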
&lt;h3 id="22-methodology-and-expected-outcomes">2.2 Methodology and Expected Outcomes&lt;/h3>
&lt;p>Notably, the importance of each position of the above-mentioned $X$ is calculated via global interactions with all other positions in $\mathcal{F}_{\phi}(\cdot)$ using a dot-product. This operation (e.g., the self-attention mechanism) takes quadratic time and memory, leading to large computational overheads. To this end, how to perform context aggregation efficiently is one of the main themes of my research.&lt;/p>
&lt;p>Inspired by human visual systems, where the eyes perform regional sensing and global context perception simultaneously and efficiently, I plan to address this issue from two directions: &lt;strong>aggregation efficiency&lt;/strong> and &lt;strong>region-context unity&lt;/strong>.
As a first attempt, I designed Multi-order Gated Aggregation (Moga) in &lt;em>Efficient Multi-order Gated Aggregation Network&lt;/em> with researchers from Prof. Stan Z. Li's lab, where discriminative local representations are modeled and contextualized by assembled dilated $\mathrm{DWConv}$ and an efficient gating mechanism in parallel, mimicking what human visual systems do. Along this line, I plan to dig deeper into more efficient, unified, and discriminative visual recognition architectures following the aforementioned aggregation efficiency and region-context unity design principles in my upcoming research career at Westlake University.&lt;/p>
&lt;h2 id="3-visual-pre-training">&lt;strong>3. Visual Pre-Training&lt;/strong>&lt;/h2>
&lt;h3 id="31-preliminary">3.1 Preliminary&lt;/h3>
&lt;p>Visual pre-training techniques improve visual representations from the perspective of data utilization and can be divided mainly into three categories: &lt;strong>supervised&lt;/strong>, &lt;strong>unsupervised&lt;/strong>, and &lt;strong>cross-modal&lt;/strong> pretraining. The development of supervised pretraining is relatively mature and clear. The ImageNet dataset, with about 14 million images, laid the foundation for deep learning; it is still the most commonly used dataset and has largely propelled the development of supervised visual pretraining. Although many current datasets already provide extremely fine-grained pixel-level annotations, there is still a large amount of human-recognizable semantic information that fails to be annotated. More importantly, even with massive human labor, it is difficult to define a standardized set of criteria covering the annotation of all human-recognizable semantic information. Therefore, it is not enough to conduct supervised pre-training solely on existing vision data. This appetite for data utilization was first addressed in NLP by self-supervised pretraining.&lt;/p>
&lt;p>Unsupervised pretraining, on the other hand, has undergone a long process of development. Starting from 2014, the first generation of geometry-based unsupervised pretraining methods emerged, such as predicting the relative positions of image patches or the rotation applied to an image, while generative methods also evolved (generative methods can be traced back to an earlier period and are not covered here). Unsupervised visual pre-training methods at that stage were still significantly weaker than fully supervised ones. By 2019, contrastive learning methods, with technical improvements, showed the potential to outperform supervised counterparts on multifarious downstream benchmarks and attracted much of the computer vision community's attention. In 2021, the rise of Vision Transformers spawned a special class of generative tasks, namely Masked Image Modeling (MIM), which gradually became the dominant method. It is an autoencoding approach that reconstructs the original image from the latent representation of its partially masked observation.&lt;/p>
&lt;p>Unlike the purely supervised and unsupervised pretraining paradigms, there is a class of methods in between, namely cross-modal pretraining. It uses weakly paired images and text as training material, avoiding bias from image-level supervised signals on the one hand and learning weak semantics better than unsupervised methods on the other. Moreover, with Transformers, the integration of vision and natural language becomes more natural and reasonable.&lt;/p>
&lt;h3 id="32-problem-analysis">3.2 Problem Analysis&lt;/h3>
&lt;p>From my perspective, unsupervised visual pre-training might be the most promising direction, as it best reflects the essence of computer vision, that is, &lt;strong>learning from degradation&lt;/strong>. Specifically, natural language, as human-created data, is semantically dense, whereas image signals captured by objective sensors have inherently low semantic density. Therefore, it is tough yet significant for visual recognition to boost the efficiency of data utilization rather than merely conduct naive supervised learning. Degradation here refers to removing part of the information that is already present and requiring the model to recover it. Notice that the idea of visual pre-training is heavily influenced by natural language pre-training, but I believe the two are fundamentally different and thus cannot be treated identically.&lt;/p>
&lt;p>However, such degradation-based approaches face an insurmountable bottleneck: &lt;strong>the conflict between degradation strength and semantic consistency&lt;/strong>. Since there is no supervised signal and the representation quality depends solely on degradation, the degradation level must be strong enough. Nonetheless, images before and after degradation may not be semantically consistent, leading to pathological visual pre-training outcomes. For example, if the information carrying key representative features is removed in Masked Image Modeling tasks, strongly misleading information is introduced during the reconstruction process, which can substantially decrease the representation capacity of target models.&lt;/p>
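&lt;p>To make the degradation step concrete, a minimal sketch of MIM-style masking is shown below: the image is split into non-overlapping patches and a high ratio of them is zeroed at random, leaving the model to reconstruct what was removed. The patch size and mask ratio here are illustrative assumptions:&lt;/p>

```python
import numpy as np

# Degradation step of Masked Image Modeling: randomly mask a high ratio
# of non-overlapping patches; a model would then reconstruct them.
def random_mask_patches(img, patch=4, ratio=0.75, seed=0):
    H, W = img.shape
    ph, pw = H // patch, W // patch          # patch grid dimensions
    n = ph * pw                              # total number of patches
    rng = np.random.default_rng(seed)
    masked_ids = rng.choice(n, size=int(n * ratio), replace=False)
    out = img.copy()
    for idx in masked_ids:                   # zero out each chosen patch
        r, c = divmod(idx, pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out, masked_ids

img = np.arange(64.0).reshape(8, 8)          # toy 8x8 "image"
masked, ids = random_mask_patches(img, patch=4, ratio=0.75)
assert masked.shape == img.shape and len(ids) == 3   # 3 of 4 patches masked
```

If an unlucky draw removes exactly the patches carrying the key semantics, the reconstruction target becomes ambiguous, which is the consistency conflict described above.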
&lt;p>Moreover, there exists a non-negligible &lt;strong>disparity between large-scale pre-training and target-domain fine-tuning&lt;/strong>: large pre-trained models are inclined to perceive the comprehensive distribution of observed data and pursue correspondingly optimal solutions, while fine-tuning tends to target a specific domain. Thus, the larger the model, the more difficult domain adaptation becomes, especially in scenarios with large domain gaps.&lt;/p>
&lt;p>In conclusion, how to design a unified pre-training pipeline that releases deep networks' capabilities, better resolves the conflict between degradation strength and semantic consistency, and allows upstream large models to adapt smoothly to downstream tasks may be the main topic in the field of visual pre-training. Personally, I think it is worth trying to combine all the above visual pre-training paradigms on a mixed dataset containing a small amount of precisely labeled data, a medium amount of paired image-text data, and a large amount of unlabeled images; specially designed pre-training strategies should then be equipped for such mixed datasets.&lt;/p>
&lt;h2 id="4-conclusion">&lt;strong>4. Conclusion&lt;/strong>&lt;/h2>
&lt;p>Herein, I present a summarized research proposal towards efficient visual representation learning for my admission to the Ph.D. program at Westlake University. Due to length limitations, I decided to focus mainly on the area I am most familiar with, computer vision. What I would like to emphasize here is that &lt;strong>I am also willing and eager to pursue constructive cross-disciplinary AI research&lt;/strong> (e.g., AI for Science) in my research career at Westlake University. I believe that with my research attitude, skills, and experience in deep learning, I am capable of making positive contributions to cross-disciplinary AI research as well.&lt;/p></description></item><item><title>[Details] Normalization in CV: How it works</title><link>https://zedongwang.netlify.app/post/details-normalization-in-cv-theory-and-applications/</link><pubDate>Tue, 12 Apr 2022 08:10:49 +0000</pubDate><guid>https://zedongwang.netlify.app/post/details-normalization-in-cv-theory-and-applications/</guid><description>&lt;p>&lt;em>&lt;strong>&lt;a href="https://zedongwang.netlify.app/post/getting-started/" target="_blank" rel="noopener">Get to know me and my academic blog.&lt;/a>&lt;/strong>&lt;/em>&lt;/p>
&lt;p>While digging into my recent few-shot segmentation research, I came across something interesting about the application of Normalization, which prompted me to delve into it again.
A series of normalization methods have emerged over the years and have greatly propelled the development of deep learning applications. Armed with powerful packages like PyTorch and TensorFlow, it is quite normal and effortless to apply these normalization functions in various types of deep learning models. However, we may not fully fathom how they work and may still hold some misconceptions. Therefore, I will try to explain the details behind Norm operations here.&lt;/p>
&lt;p>I will structure this blog around the following questions:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Why do we apply Norm in deep algorithms?&lt;/strong>&lt;/li>
&lt;li>&lt;strong>How many types of Norm do we commonly use, and how do they work in detail?&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Do we really solve the problems with Norm?&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Why don&amp;rsquo;t we apply Norm in Few-Shot learning tasks like others?&lt;/strong>&lt;/li>
&lt;/ul>
&lt;h2 id="-ics-problem">Ⅰ. ICS Problem&lt;/h2>
&lt;p>&lt;a href="https://arxiv.org/pdf/1502.03167.pdf" target="_blank" rel="noopener">Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift&lt;/a> revealed the problem of Internal Covariate Shift (ICS) and proposed a normalization-based method, Batch Normalization, to address it. Before we kick off the explanation of ICS, I will briefly recall the backpropagation process, which will be illustrated elaborately in the [Snacks] section of my blog soon.&lt;/p>
&lt;p>Consider a three-layer neural network with a hidden-layer unit computed in the forward propagation as below:
$$\gamma_{k} =\sum_{i=1}^{h} W_{ik}^{T}x_{i} + b_{k}$$&lt;/p>
&lt;p>where $\gamma_{k}$ indicates the $k^{th}$ output of the hidden layer, and $W_{ik}$, $x_{i}$, $b_{k}$, and $h$ are the weights, the $i^{th}$ independent variable (input), the $k^{th}$ bias, and the number of hidden-layer units, respectively.&lt;/p>
&lt;p>$$x_i^t=x_i^{t-1}-lr\nabla f_{x_i}(x_i^{t-1})$$&lt;/p>
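&lt;p>As a minimal numeric illustration of this gradient-descent update, the sketch below applies it to an assumed toy objective $f(x) = x^2$ (chosen only for exposition; it is not from the paper):&lt;/p>

```python
# Gradient-descent update x^t = x^{t-1} - lr * grad f(x^{t-1})
# demonstrated on the toy objective f(x) = x^2, whose gradient is 2x.
def grad_step(x, lr, grad):
    return x - lr * grad(x)

f_grad = lambda x: 2.0 * x   # derivative of f(x) = x^2

x, lr = 4.0, 0.1
for _ in range(3):           # each step multiplies x by (1 - 2*lr) = 0.8
    x = grad_step(x, lr, f_grad)
print(x)                     # 4 * 0.8**3 = 2.048
```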
&lt;h2 id="-normalization-principle">Ⅱ. Normalization Principle&lt;/h2>
&lt;h3 id="1-general-normalization">1. General Normalization&lt;/h3>
&lt;h3 id="2-batch-normalization">2. Batch Normalization&lt;/h3>
&lt;h3 id="3-layer-normalization">3. Layer Normalization&lt;/h3>
&lt;h3 id="4-instance-normalization">4. Instance Normalization&lt;/h3>
&lt;h3 id="5-group-normalization">5. Group Normalization&lt;/h3>
&lt;h3 id="6-switchable-normalization">6. Switchable Normalization&lt;/h3>
&lt;h3 id="7-weight-normalization">7. Weight Normalization&lt;/h3>
&lt;h3 id="8-cosine-normalization">8. Cosine Normalization&lt;/h3>
&lt;h2 id="-drawbacks-in-specific-cases">Ⅲ. Drawbacks in specific cases&lt;/h2></description></item><item><title>[Details] Pervasive Deep Learning Optimizers</title><link>https://zedongwang.netlify.app/post/details-deep-learning-optimizers/</link><pubDate>Tue, 12 Apr 2022 04:32:16 +0000</pubDate><guid>https://zedongwang.netlify.app/post/details-deep-learning-optimizers/</guid><description>&lt;p>&lt;em>&lt;strong>&lt;a href="https://zedongwang.netlify.app/post/getting-started/" target="_blank" rel="noopener">Get to know me and my academic blog.&lt;/a>&lt;/strong>&lt;/em>&lt;/p></description></item><item><title>[Details] A Gentle Intro of Loss, Similarity &amp; Distance in CV</title><link>https://zedongwang.netlify.app/post/details-a-brief-intro-of-loss-function-similarity-and-distance/</link><pubDate>Tue, 12 Apr 2022 04:14:35 +0000</pubDate><guid>https://zedongwang.netlify.app/post/details-a-brief-intro-of-loss-function-similarity-and-distance/</guid><description>&lt;p>&lt;em>&lt;strong>&lt;a href="https://zedongwang.netlify.app/post/getting-started/" target="_blank" rel="noopener">Get to know me and my academic blog.&lt;/a>&lt;/strong>&lt;/em>&lt;/p></description></item></channel></rss>