Featured

Annotated-Transformer-English-to-Chinese-Translator

The Transformer from “Attention is All You Need” has been on a lot of people’s minds since 2017.

In this repo, I present an “annotated” version of the Transformer paper in the form of a line-by-line implementation that builds an English-to-Chinese translator with the PyTorch deep learning framework.

Visit my blog for details and more background: https://cuicaihao.com/the-annotated-transformer-english-to-chinese-translator/, or visit my GitHub for the Jupyter Notebook (Annotated_Transformer_English_to_Chinese_Translator).

Google Publishes a Survey Paper of Efficient Transformers

Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of “X-former” models have been proposed – Reformer, Linformer, Performer, Longformer, to name a few – which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored “X-former” models, providing an organized and comprehensive overview of existing work and models across multiple domains.

In this paper, the authors propose a taxonomy of efficient Transformer models, characterizing them by the technical innovation and primary use case. Specifically, they review Transformer models that have applications in both language and vision domains, attempting to consolidate the literature across the spectrum. They also provide a detailed walk-through of many of these models and draw connections between them.

Paper Link: Efficient Transformers: A Survey

In Section 2, the authors review the background of the well-established Transformer architecture. Transformers are multi-layered architectures formed by stacking Transformer blocks on top of one another.
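To make this concrete, here is a minimal PyTorch sketch of a single Transformer block (multi-head self-attention plus a position-wise feed-forward network, each with a residual connection and layer normalization); the hyperparameters are the base values from the 2017 paper, and this is an illustrative sketch rather than code from the survey:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One Transformer block: multi-head self-attention followed by a
    position-wise feed-forward network, each wrapped in a residual
    connection and layer normalization."""

    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)            # self-attention: Q = K = V = x
        x = self.norm1(x + self.dropout(attn_out))  # residual + layer norm
        x = self.norm2(x + self.dropout(self.ff(x)))
        return x

# Stacking blocks on top of one another gives the multi-layered architecture.
model = nn.Sequential(*[TransformerBlock() for _ in range(6)])
x = torch.randn(2, 10, 512)  # (batch, sequence length, d_model)
print(model(x).shape)        # torch.Size([2, 10, 512])
```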

I really like Section 2.4, where the authors summarise the differences in the mode of usage of the Transformer block. Transformers can primarily be used in three ways (a minimal PyTorch sketch of each mode follows the list), namely:

  1. Encoder-only (e.g., for classification)
  2. Decoder-only (e.g., for language modelling, GPT2/3)
  3. Encoder-decoder (e.g., for machine translation)
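Here is a minimal sketch of the three modes using PyTorch’s built-in Transformer modules; the decoder-only case is emulated with an encoder stack plus a causal mask (which is what “decoder-only” amounts to once cross-attention is dropped), and all shapes and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

d_model, n_heads, n_layers = 512, 8, 6
layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
src = torch.randn(2, 10, d_model)  # (batch, source length, d_model)

# 1. Encoder-only (e.g., classification): encode, pool, classify.
encoder = nn.TransformerEncoder(layer, n_layers)
logits = nn.Linear(d_model, 2)(encoder(src).mean(dim=1))

# 2. Decoder-only (e.g., language modelling, GPT-2/3): a causal mask
# restricts each position to attend only to earlier positions.
causal_mask = nn.Transformer.generate_square_subsequent_mask(10)
lm_hidden = encoder(src, mask=causal_mask)

# 3. Encoder-decoder (e.g., machine translation): the decoder
# cross-attends to the encoder output (the "memory").
seq2seq = nn.Transformer(d_model, n_heads, batch_first=True)
tgt = torch.randn(2, 7, d_model)  # (batch, target length, d_model)
out = seq2seq(src, tgt)           # (2, 7, 512)
```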

In Section 3, they provide a high-level overview of efficient Transformer models and present a characterization of the different models in the taxonomy with respect to core techniques and primary use case. This is the core part of the paper, covering the technical details of 17 different papers.

Summary of Efficient Transformer Models presented in chronological order of their first public disclosure.

In the last section, the authors address the state of research pertaining to this class of efficient models, covering model evaluation, design trends, and further discussion of orthogonal efficiency efforts such as weight sharing, quantization / mixed precision, knowledge distillation, neural architecture search (NAS), and task adapters.
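As a taste of one of these orthogonal techniques, here is a minimal PyTorch sketch of post-training dynamic quantization, which is available directly in the framework (the toy model stands in for a trained Transformer; this is an illustration, not code from the survey):

```python
import torch
import torch.nn as nn

# A stand-in model; in practice this would be a trained network.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))

# Dynamic quantization converts Linear weights to int8 and quantizes
# activations on the fly, shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                                dtype=torch.qint8)
print(quantized)
```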

In sum, this is a really good paper that summarises the important work around the Transformer model. It is also a good reference for researchers and engineers looking for inspiration and wanting to try these techniques on the models in their own projects.

FYI, here is my earlier post The Annotated Transformer: English-to-Chinese Translator with source code on GitHub, which is an “annotated” version of the 2017 Transformer paper in the form of a line-by-line implementation that builds an English-to-Chinese translator with the PyTorch deep learning framework.

-END-

Reference:

Efficient Transformers: A Survey (https://arxiv.org/abs/2009.06732)

Street View Image Segmentation with PyTorch and Facebook Detectron2 (CPU+GPU)

In this post, I would like to share my practice with Facebook’s new Detectron2 package on macOS, without GPU support, for street view panoptic segmentation. If you want to create the following video yourself, this post is all you need. The demo video clip is made from my car’s dashcam footage from Preston, Melbourne; I used PyTorch and Detectron2 to create it with segmentation masks.
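For a flavor of the workflow, here is a minimal sketch of the per-frame inference step, assuming a pre-trained COCO panoptic model from the Detectron2 model zoo; the config choice and file names are illustrative assumptions, and the full pipeline (frame extraction and video re-encoding) is covered in the post:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog

# Load a pre-trained panoptic segmentation model from the model zoo.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-PanopticSegmentation/panoptic_fpn_R_101_3x.yaml")
cfg.MODEL.DEVICE = "cpu"  # run without GPU support, as on macOS
predictor = DefaultPredictor(cfg)

# Run panoptic segmentation on one extracted frame (hypothetical file name).
frame = cv2.imread("dashcam_frame.jpg")
panoptic_seg, segments_info = predictor(frame)["panoptic_seg"]

# Overlay the predicted masks on the frame and save the result.
v = Visualizer(frame[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]))
out = v.draw_panoptic_seg_predictions(panoptic_seg.to("cpu"), segments_info)
cv2.imwrite("dashcam_frame_segmented.jpg", out.get_image()[:, :, ::-1])
```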

Continue reading “Street View Image Segmentation with PyTorch and Facebook Detectron2 (CPU+GPU)”

Risk Level Calculation for Contact Tracing: An Example of the Apple iOS Framework

You may know that in Australia there is a ‘COVIDSafe app‘ for everyone.


The COVIDSafe app speeds up contacting people exposed to coronavirus (COVID-19). This helps us support and protect you, your friends and family. Please read the content on this page before downloading.

At the end of the Australian COVID-19 pandemic, users will be prompted to delete the COVIDSafe app from their phone. This will delete all app information on a person’s phone. The information contained in the information storage system will also be destroyed at the end of the pandemic.

Here is the introduction video:

So, all those descriptions are trying to tell you that your information is safe and your privacy is protected with this app. By the way, the COVIDSafe app is the only contact tracing app approved by the Australian Government. I think this means it is the first official one.

This post is for readers who want to dig a little deeper into the technical details of the technology used in this app. I will quote the documentation from Apple and keep it as simple as possible. I am not an iOS developer; I am just as curious as you, trying to understand how it measures the risk. And I am not sure whether the COVIDSafe app actually uses Apple’s framework, LOL~

My only sources are the web pages of the Australian Government Department of Health and Apple’s [iOS Framework Document] Exposure Notification, April 2020. You can click these keywords to learn more background knowledge around this app: COVIDSafe, Mesh Network, GDPR, DP3T, Beacon.

So, according to Apple’s document, the following diagram illustrates the general format of the Exposure Risk Level Calculation:

[Figure: Exposure Risk Level Calculation, from Apple’s Exposure Notification framework documentation]

Exposure Risk Level Parameters

  • Transmission Risk – An app-defined flexible value to tag a specific positive key. This value could be tagged based on symptoms, level of diagnosis verification, or other determination from the app or health authority.
  • Duration (measured by API) – Cumulative duration of the exposure.
  • Days (measured by API) – Days since the exposure incident.
  • Attenuation (measured by API) – Minimum Bluetooth signal strength attenuation (Transmission Power minus RSSI).
  • Level Value – The value, ranging from 1 to 8, that the app assigns to each Level in each of the Exposure Risk Level Parameters.
  • Level – The eight levels contained within each Exposure Risk Level Parameter.

Exposure Risk Level Parameter Weights (A, B, C, D)

  • The weights defined by the app (ranging from 0 to 100) that assign the relative importance to each of the Exposure Risk Level Parameters. A sketch of how these pieces might combine into a risk score follows below.
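To see how these definitions fit together, here is a hedged Python sketch of a risk-score calculation. The multiplicative combination of weighted level values below is an assumption for illustration (matching the structure the diagram suggests), not Apple’s exact formula; consult the framework document for the authoritative calculation.

```python
# Hypothetical illustration of the Exposure Risk Level Calculation.
# Level values (1-8) and weights (0-100) follow the definitions above;
# the multiplicative combination is an assumption, not Apple's formula.

def risk_score(transmission, duration, days, attenuation, weights):
    """Each of the first four arguments is a Level Value from 1 to 8;
    weights maps each parameter name to its app-defined weight (0-100)."""
    levels = {"transmission": transmission, "duration": duration,
              "days": days, "attenuation": attenuation}
    score = 1.0
    for name, level in levels.items():
        assert 1 <= level <= 8, f"{name} level value out of range"
        score *= level * weights[name] / 100  # weight scaled to [0, 1]
    return score

weights = {"transmission": 100, "duration": 50, "days": 25, "attenuation": 50}
print(risk_score(transmission=6, duration=4, days=2, attenuation=5,
                 weights=weights))
```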


Continue reading “Risk Level Calculation for Contact Tracing: An Example of the Apple iOS Framework”

How to Build an Artificial Intelligent System (II)

This post is a follow-up to the earlier post How to Build an Artificial Intelligent System (I). That post focused on introducing the six phases of building an intelligent system and explaining the details of the Problem Assessment phase.

In the following content, I will address the remaining phases and the key steps of the building process. Readers can download the keynote slides here: Building an Intelligent System with Machine Learning.

Continue reading “How to Build an Artificial Intelligent System (II)”

How to Build an Artificial Intelligent System (I)

Phase 1: Problem assessment – Determine the problem’s characteristics.

What is an intelligent system?

The process of building intelligent knowledge-based systems has been called knowledge engineering since the 80s. It usually contains six phases [1]:

  1. Problem assessment
  2. Data and knowledge acquisition
  3. Development of a prototype system
  4. Development of a complete system
  5. Evaluation and revision of the system
  6. Integration and maintenance of the system

Continue reading “How to Build an Artificial Intelligent System (I)”

Increasing Transparency into What It Takes to Achieve Performance Gains of Machine Learning Algorithms

The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. According to this article [1], AI could account for as much as one-tenth of the world’s electricity use by 2025.

Continue reading “Increasing Transparency into What It Takes to Achieve Performance Gains of Machine Learning Algorithms”

Just Got a Reviewer Certificate from Data Mining and Knowledge Discovery (WIREs)

Thanks to the Editors and Board of WIREs for supporting me. As an independent reviewer, I will be fair to everyone and never give in to the “scientific mafia” and “citation cartels”.

Data Mining and Knowledge Discovery (WIREs) (Impact Factor: 2.541)

[Image: WIREs Reviewer Certificate]