Question

Formatted question description: https://leetcode.ca/all/393.html

A character in UTF-8 can be from 1 to 4 bytes long, subject to the following rules:

For a 1-byte character, the first bit is 0, followed by its Unicode code.
For an n-byte character, the first n bits are all 1's, the (n+1)-th bit is 0, and it is followed by n-1 bytes whose most significant 2 bits are 10.
This is how the UTF-8 encoding would work:

   Char. number range  |        UTF-8 octet sequence
      (hexadecimal)    |              (binary)
   --------------------+---------------------------------------------
   0000 0000-0000 007F | 0xxxxxxx
   0000 0080-0000 07FF | 110xxxxx 10xxxxxx
   0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
   0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Given an array of integers representing the data, return whether it is a valid UTF-8 encoding.

Note:
The input is an array of integers. Only the least significant 8 bits of each integer are used to store the data. This means each integer represents only 1 byte of data.

Example 1:

data = [197, 130, 1], which represents the octet sequence: 11000101 10000010 00000001.

Return true.
It is a valid UTF-8 encoding for a 2-byte character followed by a 1-byte character.

Example 2:

data = [235, 140, 4], which represents the octet sequence: 11101011 10001100 00000100.

Return false.
The first 3 bits are all 1's and the 4th bit is 0, which means it is a 3-byte character.
The next byte is a continuation byte that starts with 10, which is correct.
But the second continuation byte does not start with 10, so it is invalid.
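
Both examples can be reproduced by masking the low 8 bits of each integer against the bit patterns in the table above. The sketch below is only illustrative; the helper name describeByte and the mask constants are not part of the problem. It classifies each byte on its own; whether the whole sequence is valid is decided by the solution further down.

#include <cstdio>

// Print which UTF-8 bit pattern the low 8 bits of v match (illustrative helper).
void describeByte(int v) {
    int b = v & 0xFF;                 // only the least significant 8 bits carry data
    if ((b & 0x80) == 0x00)      std::printf("%d -> 0xxxxxxx (single 1-byte character)\n", v);
    else if ((b & 0xE0) == 0xC0) std::printf("%d -> 110xxxxx (leads a 2-byte character)\n", v);
    else if ((b & 0xF0) == 0xE0) std::printf("%d -> 1110xxxx (leads a 3-byte character)\n", v);
    else if ((b & 0xF8) == 0xF0) std::printf("%d -> 11110xxx (leads a 4-byte character)\n", v);
    else if ((b & 0xC0) == 0x80) std::printf("%d -> 10xxxxxx (continuation byte)\n", v);
    else                         std::printf("%d -> invalid leading bit pattern\n", v);
}

int main() {
    int example1[] = {197, 130, 1};   // Example 1: 2-byte lead, continuation, ASCII
    int example2[] = {235, 140, 4};   // Example 2: the last byte is not a continuation byte
    for (int v : example1) describeByte(v);
    for (int v : example2) describeByte(v);
    return 0;
}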

Algorithm

For any byte B in a UTF-8 encoding:

If the first bit of B is 0, then B independently represents a character (an ASCII code);
If the first bit of B is 1 and the second bit is 0, then B is a continuation byte inside a multi-byte character;
If the first two bits of B are 1 and the third bit is 0, then B is the first byte of a character encoded in two bytes;
If the first three bits of B are 1 and the fourth bit is 0, then B is the first byte of a character encoded in three bytes;
If the first four bits of B are 1 and the fifth bit is 0, then B is the first byte of a character encoded in four bytes.

Therefore, for any byte in a UTF-8 encoding,

  • the first bit tells whether the byte is an ASCII character on its own;
  • the first two bits tell whether the byte is the leading byte of a character or a continuation byte;
  • the first three to four bits (when the first two bits are both 1) confirm that the byte is a leading byte and tell how many bytes the character occupies;
  • the first five bits (when the first four bits are all 1) expose an invalid encoding or a transmission error, because no valid byte starts with five 1's.
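
These observations reduce to counting how many consecutive 1 bits a byte starts with: 0 means an ASCII byte, 1 means a stray continuation byte, 2 to 4 gives the length of the character, and 5 or more can never start a valid byte. A minimal sketch of such a counter (the helper name leadingOnes is made up for illustration):

// Count how many consecutive 1 bits the low byte of v starts with (0..8).
int leadingOnes(int v) {
    int b = v & 0xFF, cnt = 0;
    for (int mask = 0x80; mask != 0 && (b & mask); mask >>= 1) ++cnt;
    return cnt;
}
// leadingOnes(0b00000001) == 0  -> ASCII byte
// leadingOnes(0b10000010) == 1  -> continuation byte
// leadingOnes(0b11000101) == 2  -> leading byte of a 2-byte character
// leadingOnes(0b11111000) == 5  -> invalid leading byte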

Code

C++

class Solution {
public:
    bool validUtf8(vector<int>& data) {
        int n = data.size();
        for (int i = 0; i < n; ++i) {
            if (data[i] < 0b10000000) {
                // Highest bit is 0: a standalone 1-byte (ASCII) character.
                continue;
            } else {
                // Count the leading 1 bits of this byte to get the character's length.
                int cnt = 0, val = data[i];
                for (int j = 7; j >= 1; --j) {
                    if (val >= (1 << j)) ++cnt;
                    else break;
                    val -= (1 << j);
                }
                // A leading byte must announce 2 to 4 bytes, all of which must fit in the array.
                if (cnt == 1 || cnt > 4 || cnt > n - i) return false;
                // Every following byte of the character must be a continuation byte (10xxxxxx).
                for (int j = i + 1; j < i + cnt; ++j) {
                    if (data[j] > 0b10111111 || data[j] < 0b10000000) return false;
                }
                i += cnt - 1;
            }
        }
        return true;
    }
};
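
A quick driver for checking the two examples from the problem statement; it assumes the Solution class above is compiled in the same translation unit:

#include <iostream>
#include <vector>
using namespace std;

// ... class Solution as defined above ...

int main() {
    Solution sol;
    vector<int> a = {197, 130, 1};   // Example 1: expected true
    vector<int> b = {235, 140, 4};   // Example 2: expected false
    cout << boolalpha << sol.validUtf8(a) << endl;  // prints true
    cout << boolalpha << sol.validUtf8(b) << endl;  // prints false
    return 0;
}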
