Advice on Scroller Animation, Part I

Advice

The Scroller widget doesn’t actually do much of the work at all for you. It doesn’t fire any callbacks, it doesn’t animate anything, it just responds to various method calls.

So what good is it? Well, it does all of the calculation for e.g. a fling for you, which is handy. So what you’d generally do is create a Runnable that repeatedly asks the Scroller, “What should my scroll position be now? Are we done flinging yet?” Then you repost that runnable on a Handler (usually on the View) until the fling is done.

Here’s an example from a Fragment I’m working on right now:

```java
private class Flinger implements Runnable {
    private final Scroller scroller;

    private int lastX = 0;

    Flinger() {
        scroller = new Scroller(getActivity());
    }

    void start(int initialVelocity) {
        int initialX = scrollingView.getScrollX();
        int maxX = Integer.MAX_VALUE; // or some appropriate max value in your code
        scroller.fling(initialX, 0, initialVelocity, 0, 0, maxX, 0, 10);
        Log.i(TAG, "starting fling at " + initialX + ", velocity is " + initialVelocity);

        lastX = initialX;
        getView().post(this);
    }

    public void run() {
        if (scroller.isFinished()) {
            Log.i(TAG, "scroller is finished, done with fling");
            return;
        }

        boolean more = scroller.computeScrollOffset();
        int x = scroller.getCurrX();
        int diff = lastX - x;
        if (diff != 0) {
            scrollingView.scrollBy(diff, 0);
            lastX = x;
        }

        if (more) {
            getView().post(this);
        }
    }

    boolean isFlinging() {
        return !scroller.isFinished();
    }

    void forceFinished() {
        if (!scroller.isFinished()) {
            scroller.forceFinished(true);
        }
    }
}
```

reference articles:

ref1

Getting Started: Making a Module AAR for an Android Project (Remote)

Introduction

AAR Format

The ‘aar’ bundle is the binary distribution of an Android Library Project.

The file extension is .aar, and the maven artifact type should be aar as well. The file itself is a simple zip file with the following entries:

  • /AndroidManifest.xml (mandatory)
  • /classes.jar (mandatory)
  • /res/ (mandatory)
  • /R.txt (mandatory)
  • /assets/ (optional)
  • /libs/*.jar (optional)
  • /jni/&lt;abi&gt;/*.so (optional)
  • /proguard.txt (optional)
  • /lint.jar (optional)

These entries are directly at the root of the zip file.

The R.txt file is the output of aapt with --output-text-symbols.
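
Since an .aar is just a zip archive, you can inspect or even assemble one with nothing but java.util.zip. A minimal, hypothetical sketch (the entry names follow the list above; the `AarPeek` class and file locations are illustrative, not part of any build tool):

```java
import java.io.*;
import java.util.*;
import java.util.zip.*;

public class AarPeek {
    // List the entry names of an aar (it is a plain zip).
    public static List<String> entries(File aar) throws IOException {
        List<String> names = new ArrayList<>();
        try (ZipFile zf = new ZipFile(aar)) {
            for (Enumeration<? extends ZipEntry> e = zf.entries(); e.hasMoreElements();) {
                names.add(e.nextElement().getName());
            }
        }
        return names;
    }

    // Write a minimal aar-shaped zip with the mandatory entries (empty here).
    public static File writeMinimalAar(File dir) throws IOException {
        File aar = new File(dir, "demo.aar");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(aar))) {
            for (String name : new String[] {"AndroidManifest.xml", "classes.jar", "R.txt"}) {
                zos.putNextEntry(new ZipEntry(name));
                zos.closeEntry();
            }
        }
        return aar;
    }

    public static void main(String[] args) throws IOException {
        File aar = writeMinimalAar(new File(System.getProperty("java.io.tmpdir")));
        System.out.println(entries(aar)); // [AndroidManifest.xml, classes.jar, R.txt]
    }
}
```

A real aar produced by the Android plugin would of course have non-empty entries, but the container format is exactly this.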


Differences between AAR and JAR

  • *.jar: contains only class files and the manifest; it does not contain resource files such as images or anything under res.

  • *.aar: contains everything — class files and res resource files are all included.

Getting Started

Build an AAR

Create a new project, then create a new module in it and choose Android Library; click through the rest of the wizard. Afterwards you will see that the first line of the module's build.gradle file is apply plugin: 'com.android.library', which marks it as a library. Then run ./gradlew install, and you will find an .aar file in the module's build outputs directory.

Upload the Binary

1. Edit the build.gradle of mxaar
```java
apply plugin: 'com.android.library'
apply plugin: 'com.github.dcendents.android-maven'
apply plugin: 'com.jfrog.bintray'

android {
    compileSdkVersion 24
    buildToolsVersion "24.0.1"

    defaultConfig {
        minSdkVersion 14
        targetSdkVersion 24
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }

    lintOptions {
        disable 'InvalidPackage'
        checkReleaseBuilds false
        abortOnError false
    }
}

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    compile 'com.android.support:appcompat-v7:24.2.0'
    testCompile 'junit:junit:4.12'
}

version = "1.0.0"

def siteUrl = 'https://github.com/dachmx/mxaar' // project homepage
def gitUrl = 'https://github.com/dachmx/mxaar.git' // git repo
group = "com.dachmx" // maven group id

install {
    repositories.mavenInstaller {
        // This generates POM.xml with proper parameters
        pom {
            project {
                packaging 'aar'
                name 'MxAAR For Android' // Title of the package
                url siteUrl
                // Set your license
                licenses {
                    license {
                        name 'The Apache Software License, Version 2.0'
                        url 'http://www.apache.org/licenses/LICENSE-2.0.txt'
                    }
                }
                developers {
                    developer {
                        id 'dachmx'
                        name 'dachmx'
                        email 'dachmx@outlook.com'
                    }
                }
                scm {
                    connection gitUrl
                    developerConnection gitUrl
                    url siteUrl
                }
            }
        }
    }
}

task sourcesJar(type: Jar) {
    from android.sourceSets.main.java.srcDirs
    classifier = 'sources'
}

task javadoc(type: Javadoc) {
    source = android.sourceSets.main.java.srcDirs
    classpath += project.files(android.getBootClasspath().join(File.pathSeparator))
}

task javadocJar(type: Jar, dependsOn: javadoc) {
    classifier = 'javadoc'
    from javadoc.destinationDir
}

artifacts {
    archives javadocJar
    archives sourcesJar
}

Properties properties = new Properties()
properties.load(project.rootProject.file('local.properties').newDataInputStream())
bintray {
    user = properties.getProperty("bintray.user")
    key = properties.getProperty("bintray.apikey")

    configurations = ['archives']
    pkg {
        userOrg = "dachmxorg" // with the new version of bintray, the repo lives in the organization
        repo = "mavean"
        name = "MxAAR"
        websiteUrl = siteUrl
        vcsUrl = gitUrl
        licenses = ["Apache-2.0"]
        publish = true
    }
}
```

2. Upload

```shell
./gradlew install
./gradlew bintrayUpload
```
3. Use the aar

Add the repo to your project's build.gradle:

```java
maven {
    url 'https://dl.bintray.com/dachmxorg/mavean/'
}
```

Add the compile dependency:

```java
compile 'com.dachmx:mxaar:1.0.0'
```

4. Attention

The most important thing is that you must set userOrg; otherwise the upload path cannot be found.

ref:

New version of Bintray: upload an aar to jcenter (used in the article)

build aar with gradle

android-reference-local-aar

Analysis of the Source Code of Handler, Looper and MessageQueue

Introduction

On Android, the UI can only be updated from the main thread. To let other threads drive UI changes, Android provides the Handler mechanism: Handler, Looper, and MessageQueue collaborate so that work posted from other threads ends up being executed on the main thread.

An example of a Handler:

```java
private Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        tv.setText("mHandler change UI");
        super.handleMessage(msg);
    }
};
```

In everyday code we rarely see Looper or MessageQueue directly, so where are they created and how do they collaborate? On the main thread you never call Looper explicitly; it is set up by default in the ActivityThread.main method.

Source Code

ActivityThread.main

```java
public static void main(String[] args) {
    Trace.traceBegin(Trace.TRACE_TAG_ACTIVITY_MANAGER, "ActivityThreadMain");
    SamplingProfilerIntegration.start();

    // CloseGuard defaults to true and can be quite spammy. We
    // disable it here, but selectively enable it later (via
    // StrictMode) on debug builds, but using DropBox, not logs.
    CloseGuard.setEnabled(false);

    Environment.initForCurrentUser();

    // Set the reporter for event logging in libcore
    EventLogger.setReporter(new EventLoggingReporter());

    // Make sure TrustedCertificateStore looks in the right place for CA certificates
    final File configDir = Environment.getUserConfigDirectory(UserHandle.myUserId());
    TrustedCertificateStore.setDefaultUserDirectory(configDir);

    Process.setArgV0("<pre-initialized>");

    Looper.prepareMainLooper(); // create the main thread's Looper

    ActivityThread thread = new ActivityThread();
    thread.attach(false);

    if (sMainThreadHandler == null) {
        sMainThreadHandler = thread.getHandler();
    }

    if (false) {
        Looper.myLooper().setMessageLogging(new
                LogPrinter(Log.DEBUG, "ActivityThread"));
    }

    // End of event ActivityThreadMain.
    Trace.traceEnd(Trace.TRACE_TAG_ACTIVITY_MANAGER);
    Looper.loop(); // start the Looper loop

    throw new RuntimeException("Main thread loop unexpectedly exited");
}
```

As shown above, Looper.prepareMainLooper() is called:

```java
public static void prepareMainLooper() {
    prepare(false);
    synchronized (Looper.class) {
        if (sMainLooper != null) {
            throw new IllegalStateException("The main Looper has already been prepared.");
        }
        sMainLooper = myLooper();
    }
}
```

prepareMainLooper calls prepare, and reading prepare shows that its job is to create a Looper and assign it to sThreadLocal; the myLooper method then returns the current thread's Looper. Let's look at what new Looper(quitAllowed) initializes:

```java
private Looper(boolean quitAllowed) {
    mQueue = new MessageQueue(quitAllowed);
    mThread = Thread.currentThread();
}
```

Here we finally meet MessageQueue: the constructor creates one, and this queue stores the subsequent Messages. Going back to the ActivityThread.main method, we find that it then calls Looper.loop() to start the loop that listens for messages on the MessageQueue.

Loop

```java
public static void loop() {
    final Looper me = myLooper(); // get the current thread's Looper
    if (me == null) {
        throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
    }
    final MessageQueue queue = me.mQueue; // get the message queue

    // Make sure the identity of this thread is that of the local process,
    // and keep track of what that identity token actually is.
    Binder.clearCallingIdentity();
    final long ident = Binder.clearCallingIdentity();

    for (;;) {
        Message msg = queue.next(); // might block
        if (msg == null) {
            // No message indicates that the message queue is quitting.
            return;
        }

        // This must be in a local variable, in case a UI event sets the logger
        final Printer logging = me.mLogging;
        if (logging != null) {
            logging.println(">>>>> Dispatching to " + msg.target + " " +
                    msg.callback + ": " + msg.what);
        }

        final long traceTag = me.mTraceTag;
        if (traceTag != 0) {
            Trace.traceBegin(traceTag, msg.target.getTraceName(msg));
        }
        try {
            msg.target.dispatchMessage(msg); // dispatch the message via its Handler
        } finally {
            if (traceTag != 0) {
                Trace.traceEnd(traceTag);
            }
        }

        if (logging != null) {
            logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
        }

        // Make sure that during the course of dispatching the
        // identity of the thread wasn't corrupted.
        final long newIdent = Binder.clearCallingIdentity();
        if (ident != newIdent) {
            Log.wtf(TAG, "Thread identity changed from 0x"
                    + Long.toHexString(ident) + " to 0x"
                    + Long.toHexString(newIdent) + " while dispatching to "
                    + msg.target.getClass().getName() + " "
                    + msg.callback + " what=" + msg.what);
        }

        msg.recycleUnchecked();
    }
}
```

In loop(), the current thread's Looper is obtained first, along with the MessageQueue inside it; in other words, the Looper is bound to the current thread. It then enters an infinite for loop whose job is to keep taking Messages off the queue and finally call dispatchMessage on msg.target. So what is target? Let's look inside Message.

The relevant fields of Message are as follows:

```java
/*package*/ int flags;
/*package*/ long when;

/*package*/ Bundle data;

/*package*/ Handler target;

/*package*/ Runnable callback;

// sometimes we store linked lists of these things
/*package*/ Message next;
```
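
The `next` field is also what lets Message act as a node in its recycling pool (Message.obtain()/recycle()). A hypothetical plain-Java sketch of that intrusive free-list idea — `PooledNode` is illustrative, not the framework class:

```java
public class PooledNode {
    private static PooledNode sPool;      // head of the free list
    private static final Object LOCK = new Object();

    PooledNode next;                      // links both queue and recycling pool
    public int what;

    // Like Message.obtain(): reuse a recycled instance if one is pooled.
    public static PooledNode obtain() {
        synchronized (LOCK) {
            if (sPool != null) {
                PooledNode n = sPool;
                sPool = n.next;
                n.next = null;
                return n;
            }
        }
        return new PooledNode();          // pool empty: allocate a fresh one
    }

    // Like Message.recycle(): clear state and push onto the free list.
    public void recycle() {
        what = 0;
        synchronized (LOCK) {
            next = sPool;
            sPool = this;
        }
    }
}
```

Pooling like this is why the framework warns against touching a Message after it has been recycled: the same instance will be handed out again.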

Digging deeper, we find that target is a Handler, and dispatchMessage is called on it.

Handler

```java
public Handler(Callback callback, boolean async) {
    if (FIND_POTENTIAL_LEAKS) {
        final Class<? extends Handler> klass = getClass();
        if ((klass.isAnonymousClass() || klass.isMemberClass() || klass.isLocalClass()) &&
                (klass.getModifiers() & Modifier.STATIC) == 0) {
            Log.w(TAG, "The following Handler class should be static or leaks might occur: " +
                    klass.getCanonicalName());
        }
    }

    mLooper = Looper.myLooper();
    if (mLooper == null) {
        throw new RuntimeException(
                "Can't create handler inside thread that has not called Looper.prepare()");
    }
    mQueue = mLooper.mQueue;
    mCallback = callback;
    mAsynchronous = async;
}
```

During initialization the Handler acquires the Looper of the thread it is created on, and from it the message queue inside that Looper. If the thread's Looper is null it throws an exception, which explains why a Handler created on a non-main thread requires calling Looper.prepare() and Looper.loop(), while on the main thread this is not required: there they have already been called by default.

```java
public void dispatchMessage(Message msg) {
    if (msg.callback != null) {
        handleCallback(msg);
    } else {
        if (mCallback != null) {
            if (mCallback.handleMessage(msg)) {
                return;
            }
        }
        handleMessage(msg);
    }
}

private static void handleCallback(Message message) {
    message.callback.run();
}
```

The relationship between Handler, Message, Looper, and MessageQueue can be described with a figure (handler-message-loop diagram).

Summary

To summarize the flow: the thread where a Handler is created must have a Looper. On the main thread this is done for us by default; any other thread must call Looper.prepare() to create a Looper and Looper.loop() to start processing messages. Each Looper owns a MessageQueue that stores Messages. A Handler puts messages into that queue through post, send, and related calls, while the Looper's infinite loop keeps monitoring the MessageQueue, takes messages out one by one, and hands each to the Handler bound as msg.target via dispatchMessage, which finally calls either the Runnable's run() or the Handler's handleMessage() to process the message.
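
This collaboration can be mimicked in plain Java with no Android classes — a toy analogy, not the real implementation: the "looper" is a thread draining a blocking queue, and posting is what a Handler does.

```java
import java.util.concurrent.*;

public class ToyLooper {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean quit = false;

    // Like Looper.loop(): block on the queue and run each message in turn.
    public void loop() {
        while (!quit) {
            try {
                Runnable msg = queue.take(); // might block, like MessageQueue.next()
                msg.run();                   // like dispatchMessage()
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    // Like Handler.post(): safe to call from any thread.
    public void post(Runnable r) {
        queue.offer(r);
    }

    public void quit() {
        quit = true;
        post(() -> { });                     // wake the loop so it can exit
    }

    public static void main(String[] args) throws Exception {
        ToyLooper looper = new ToyLooper();
        Thread main = new Thread(looper::loop, "toy-main");
        main.start();
        CompletableFuture<String> ran = new CompletableFuture<>();
        // a "background thread" posts work that actually runs on toy-main
        looper.post(() -> ran.complete(Thread.currentThread().getName()));
        System.out.println(ran.get());       // toy-main
        looper.quit();
        main.join();
    }
}
```

The real framework adds timed delivery, message pooling, and per-thread Looper storage (ThreadLocal), but the shape of the loop is the same.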

ref:

source code of loop, handler and message(Used in the article)

A Simple Example of a Hot Fix Like WeChat's

Introduction

Not long ago I read a blog post about WeChat's hot patch and was instantly attracted, so I decided to take the time to study it. It is a very good article and I recommend it; the link is attached at the end of this post.

Principle

Let's start by assuming that old.apk is the older version (the one with the bug) and new.apk is the new one (the bugfix version), and that the differences between them are code-level bug fixes only, with no resource-level replacement.

We extract the classes.dex file from each apk and name them old.dex and new.dex. The ultimate goal is for the program in old.apk to run new.dex in place of old.dex. To this end we take the following steps:

Using the bsdiff tool, compute the difference between old.dex and new.dex as classes.patch (done on the computer).

On the phone, locate the installed old.apk and extract its classes.dex, i.e. old.dex.

On the phone, use the bspatch tool (a bundled .so) to combine old.dex and classes.patch into new.dex.

In the application's attachBaseContext, construct a DexClassLoader and insert new.dex in front of the base ClassLoader's dexElements, so that new.dex is loaded first, achieving the hot-patch effect.
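
The old + patch = new contract can be illustrated with a toy byte-wise scheme. This is NOT bsdiff's actual algorithm (bsdiff produces far smaller patches via suffix sorting); it only demonstrates the round trip the steps above rely on, with made-up helper names:

```java
import java.util.Arrays;

public class ToyPatch {
    // "diff": patch[i] = old[i] XOR new[i] (toy scheme; pads old with zeros)
    public static byte[] diff(byte[] oldBytes, byte[] newBytes) {
        byte[] patch = new byte[newBytes.length];
        for (int i = 0; i < newBytes.length; i++) {
            byte o = i < oldBytes.length ? oldBytes[i] : 0;
            patch[i] = (byte) (o ^ newBytes[i]);
        }
        return patch;
    }

    // "patch": new[i] = old[i] XOR patch[i], mirroring bspatch's role on the phone
    public static byte[] apply(byte[] oldBytes, byte[] patch) {
        byte[] out = new byte[patch.length];
        for (int i = 0; i < patch.length; i++) {
            byte o = i < oldBytes.length ? oldBytes[i] : 0;
            out[i] = (byte) (o ^ patch[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] oldDex = "class Foo { buggy }".getBytes();
        byte[] newDex = "class Foo { fixed }".getBytes();
        byte[] patch = diff(oldDex, newDex);
        // applying the patch to the old bytes reconstructs the new bytes
        System.out.println(Arrays.equals(apply(oldDex, patch), newDex)); // true
    }
}
```

Only the small patch ships to the device; the expensive diff runs once on the computer, exactly as in the steps above.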

The problems encountered in the process

On the device I obtained the apk path via getApplicationInfo().sourceDir. I originally thought that, since I have read permission on the apk path, I could simply read classes.dex straight out of this zip file without copying anything out. But while debugging I found that I actually had no permission to read the contents of the zip's subdirectories (why? still investigating).

While compiling bspatch.so I ran into "dlopen failed: cannot locate symbol \"signal\"". This was a real pit. The reason, as explained on Stack Overflow, is that before android-21 signal was an inline function in the NDK headers, but it no longer is. How to solve it?

  • I. Use APP_PLATFORM := android-15 in Application.mk when building with the ndk (":=" is followed by the lowest sdk version you build against).

  • II. Or set your NDK compile platform to a version earlier than android-21.

The specific link: cannot locate symbol "signal" (see the references).

During the experiments I had the idea of directly loading an apk whose classes.dex had been replaced, so that all resources could be obtained from it as well, but then realized how naive that was (see Apk file structure analysis). Since I want to replace the entire resource set anyway, why not diff old.apk and new.apk directly, obtain the patch, then synthesize the new apk from old.apk plus the patch, and load that directly? (This is just an idea; I will verify it later.)

Deficiencies and problems

To verify the WeChat-style hot patch I wrote a very simple demo that implements the basic functionality, but I did not dig into the harder questions, such as memory consumption, patching time, and success rate as the package grows.

At this stage I can verify that, obfuscated or not, implementing a hot patch on this principle is feasible. Producing a patch this way should be easier than the earlier NuWa hot fix: one works forward and the other in reverse, the patch being just the difference between the old and new packages.

It is also worth studying whether a hot patch can make a replaced resource file take effect; in principle it should be feasible.

ref:

The evolution of WeChat's Android hot patch practice (used in the article)

fake wechat hot fix

cannot locate symbol “signal”

Apk file structure analysis 1

Getting Started: Making a Module AAR for an Android Project (Local)

Introduction

AAR Format

The ‘aar’ bundle is the binary distribution of an Android Library Project.

The file extension is .aar, and the maven artifact type should be aar as well. The file itself is a simple zip file with the following entries:

  • /AndroidManifest.xml (mandatory)
  • /classes.jar (mandatory)
  • /res/ (mandatory)
  • /R.txt (mandatory)
  • /assets/ (optional)
  • /libs/*.jar (optional)
  • /jni/&lt;abi&gt;/*.so (optional)
  • /proguard.txt (optional)
  • /lint.jar (optional)

These entries are directly at the root of the zip file.

The R.txt file is the output of aapt with --output-text-symbols.


Differences between AAR and JAR

  • *.jar: contains only class files and the manifest; it does not contain resource files such as images or anything under res.

  • *.aar: contains everything — class files and res resource files are all included.

Getting Started

Build an AAR

Create a new project, then create a new module in it and choose Android Library; click through the rest of the wizard. Afterwards you will see that the first line of the module's build.gradle file is apply plugin: 'com.android.library', which marks it as a library. Then run ./gradlew assembleRelease, and you will find an .aar file in the module's build outputs directory.

Use AAR

1. Put the aar file in a directory, for example the module's libs directory.

2. In the app's build.gradle file, add the following:

```java
repositories {
    flatDir {
        dirs 'libs' // this way we can find the .aar file in the libs folder
    }
}
```

3. After that, add a gradle dependency to reference the library conveniently:
```java
dependencies {
    compile(name: 'test', ext: 'aar')
}
```

ref:

build aar with gradle

android-reference-local-aar

Monitor and manage the production environment

Introduction

The spring-boot-actuator module provides support for monitoring and managing the production environment. You can use HTTP, JMX, SSH, telnet, etc. to manage and monitor applications. Auditing, health, and data-gathering endpoints are automatically added to the application.

Implementation

A simple Spring Boot project

First, write a basic spring boot project.

Maven-based projects add the 'starter' dependency:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

When starting, log messages like the following can be found:

```
2014-08-28 09:57:40.953  INFO 4064 --- [           main] o.s.b.a.e.mvc.EndpointHandlerMapping     : Mapped "{[/trace],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
2014-08-28 09:57:40.954  INFO 4064 --- [           main] o.s.b.a.e.mvc.EndpointHandlerMapping     : Mapped "{[/mappings],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.Object org.springframework.boot.actuate.endpoint.mvc.EndpointMvcAdapter.invoke()
```

Explanation of the log

Specific description:

| ID | Description | Sensitive |
|----|-------------|-----------|
| autoconfig | Displays an auto-configuration report showing all auto-configuration candidates and why they were or were not applied | true |
| beans | Displays a complete list of all Spring Beans in the application | true |
| configprops | Displays a collated list of all @ConfigurationProperties | true |
| dump | Performs a thread dump | true |
| env | Exposes properties from Spring's ConfigurableEnvironment | true |
| health | Shows the health information of the application (a simple 'status' for an unauthenticated connection, all information for an authenticated one) | false |
| info | Displays arbitrary application information | false |
| metrics | Displays the current application's 'metrics' information | true |
| mappings | Displays a collated list of all @RequestMapping paths | true |
| shutdown | Allows the app to be shut down gracefully (not enabled by default) | true |
| trace | Displays trace information (defaults to some recent HTTP requests) | true |

Health Check

For example, visit http://localhost:7231/health and you get:

```
{"status":"UP"}
```

Add this to the application configuration:

```
endpoints.health.sensitive=false
```

Then visit http://localhost:7231/health again:

```
{
  "status": "UP",
  "diskSpace": {"status": "UP", "free": 32516145152, "threshold": 10485760},
  "db": {"status": "UP", "database": "Microsoft SQL Server", "hello": 1440729256277}
}
```

You can check for health information in some other cases. The following HealthIndicators are automatically configured by Spring Boot (at the appropriate time):

| Name | Description |
|------|-------------|
| DiskSpaceHealthIndicator | Low disk space detection |
| DataSourceHealthIndicator | Checks whether a connection can be obtained from the DataSource |
| MongoHealthIndicator | Checks whether a Mongo database is up |
| RabbitHealthIndicator | Checks whether a Rabbit server is up |
| RedisHealthIndicator | Checks whether a Redis server is up |
| SolrHealthIndicator | Checks whether a Solr server is up |

Customization: of course, you can also register Spring beans that implement the HealthIndicator interface. The Health response needs to include a status and, optionally, details for display.

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MyHealth implements HealthIndicator {

    @Override
    public Health health() {
        int errorCode = check(); // perform some specific health check
        if (errorCode != 0) {
            return Health.down().withDetail("Error Code", errorCode).build();
        }
        return Health.up().build();
    }
}
```

Trace

Visit http://localhost:7231/trace to see the results; by default it shows some of the most recent HTTP requests:

```
[
  {
    "timestamp": 1440728799269,
    "info": {
      "method": "GET",
      "path": "/health",
      "headers": {
        "request": {
          "host": "localhost:7231",
          "connection": "keep-alive",
          "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
          "user-agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.122 Safari/537.36",
          "accept-encoding": "gzip, deflate, sdch",
          "accept-language": "zh-CN,zh;q=0.8,en;q=0.6",
          "ra-ver": "3.0.7",
          "ra-sid": "74E754D8-20141117-085628-93e7a4-1dd60b"
        },
        "response": {
          "X-Application-Context": "executecount:integration:7231",
          "Content-Type": "application/json;charset=UTF-8",
          "Transfer-Encoding": "chunked",
          "Content-Encoding": "gzip",
          "Vary": "Accept-Encoding",
          "Date": "Fri, 28 Aug 2015 02:26:39 GMT",
          "status": "200"
        }
      }
    }
  }
]
```

Look at InMemoryTraceRepository: by default it keeps 100 events. If necessary you can define your own InMemoryTraceRepository instance, or, if desired, create your own alternative TraceRepository implementation.
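
A hypothetical sketch of that "keep only the last N events" behaviour (`BoundedTraceStore` and its eviction policy are illustrative, not Spring's actual implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedTraceStore<T> {
    private final int capacity;
    private final Deque<T> events = new ArrayDeque<>();

    public BoundedTraceStore(int capacity) {
        this.capacity = capacity;
    }

    // Add an event, evicting the oldest once the capacity
    // (100 by default in InMemoryTraceRepository) is reached.
    public synchronized void add(T event) {
        if (events.size() == capacity) {
            events.removeFirst();
        }
        events.addLast(event);
    }

    public synchronized int size() {
        return events.size();
    }
}
```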

ref

Building a RESTful Web Service with Spring Boot Actuator

Spring Boot Reference Guide

supervise the health of spring boot

Analysis on Layout for Android

Current Situation

If you don't know the layout of an Android phone's screen, you cannot design the app you want: where you can place controls, and where notifications appear. You must know these things before you start coding.

Analysis of Layout

Description of the Layout

The action bar is a dedicated piece of real estate at the top of each screen that is generally persistent throughout the app.

It provides several key functions:

  • Makes important actions prominent and accessible in a predictable way (such as New or Search).
  • Supports consistent navigation and view switching within apps.
  • Reduces clutter by providing an action overflow for rarely used actions.
  • Provides a dedicated space for giving your app an identity.

If you're new to writing Android apps, note that the action bar is one of the most important design elements you can implement. Following the guidelines described here will go a long way toward making your app's interface consistent with the core Android apps.

Measuring Each Bar

status bar height

```java
Rect frame = new Rect();
getWindow().getDecorView().getWindowVisibleDisplayFrame(frame);
int statusBarHeight = frame.top;
```

title bar height

```java
int contentTop = getWindow().findViewById(Window.ID_ANDROID_CONTENT).getTop();
// statusBarHeight is the height of the status bar
int titleBarHeight = contentTop - statusBarHeight;
```

screen height

```java
WindowManager windowManager = getWindowManager();
Display display = windowManager.getDefaultDisplay();
screenWidth = display.getWidth();
screenHeight = display.getHeight();

// or

DisplayMetrics dm = new DisplayMetrics();
this.getWindowManager().getDefaultDisplay().getMetrics(dm); // 'this' is the current activity
screenWidth = dm.widthPixels;
screenHeight = dm.heightPixels;
```

Combining the snippets above:

```java
// status bar height
int statusBarHeight = 0;
int resourceId = getResources().getIdentifier("status_bar_height", "dimen", "android");
if (resourceId > 0) {
    statusBarHeight = getResources().getDimensionPixelSize(resourceId);
}

// action bar height
int actionBarHeight = 0;
final TypedArray styledAttributes = getActivity().getTheme().obtainStyledAttributes(
        new int[] { android.R.attr.actionBarSize }
);
actionBarHeight = (int) styledAttributes.getDimension(0, 0);
styledAttributes.recycle();

// navigation bar height
int navigationBarHeight = 0;
resourceId = getResources().getIdentifier("navigation_bar_height", "dimen", "android");
if (resourceId > 0) {
    navigationBarHeight = getResources().getDimensionPixelSize(resourceId);
}
```

Getting Started Analyzing Multidex on Android

Introduction

As the Android platform has continued to grow, so has the size of Android apps. When your application and the libraries it references reach a certain size, you encounter build errors that indicate your app has reached a limit of the Android app build architecture. Earlier versions of the build system report this error as follows:

```
Conversion to Dalvik format failed:
Unable to execute dex: method ID not in [0, 0xffff]: 65536
```

More recent versions of the Android build system display a different error, which is an indication of the same problem:

```
trouble writing output:
Too many field references: 131000; max is 65536.
You may try using --multi-dex option.
```

Both these error conditions display a common number: 65,536. This number is significant in that it represents the total number of references that can be invoked by the code within a single Dalvik Executable (dex) bytecode file. If you have built an Android app and received this error, then congratulations, you have a lot of code! This document explains how to move past this limitation and continue building your app.

Note: The guidance provided in this document supersedes the guidance given in the Android Developers blog post Custom Class Loading in Dalvik.

About the 64K Reference Limit

Android application (APK) files contain executable bytecode files in the form of Dalvik Executable (DEX) files, which contain the compiled code used to run your app. The Dalvik Executable specification limits the total number of methods that can be referenced within a single DEX file to 65,536—including Android framework methods, library methods, and methods in your own code. In the context of computer science, the term Kilo, K, denotes 1024 (or 2^10). Because 65,536 is equal to 64 X 1024, this limit is referred to as the ‘64K reference limit’.
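
The arithmetic behind the name: a dex method reference index is an unsigned 16-bit value, so:

```java
public class DexLimit {
    // 2^16 distinct method indices fit in an unsigned 16-bit field
    static final int MAX_REFS = 1 << 16;

    public static void main(String[] args) {
        System.out.println(MAX_REFS);              // 65536
        System.out.println(MAX_REFS == 64 * 1024); // true, hence "64K"
        System.out.println(0xffff);                // 65535, the last valid index in [0, 0xffff]
    }
}
```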

Getting past this limit requires that you configure your app build process to generate more than one DEX file, known as a multidex configuration.

Multidex support prior to Android 5.0

Versions of the platform prior to Android 5.0 (API level 21) use the Dalvik runtime for executing app code. By default, Dalvik limits apps to a single classes.dex bytecode file per APK. In order to get around this limitation, you can use the multidex support library, which becomes part of the primary DEX file of your app and then manages access to the additional DEX files and the code they contain.

Note: If your project is configured for multidex with minSdkVersion 20 or lower, and you deploy to target devices running Android 4.4 (API level 20) or lower, Android Studio disables Instant Run.

Multidex support for Android 5.0 and higher

Android 5.0 (API level 21) and higher uses a runtime called ART which natively supports loading multiple dex files from application APK files. ART performs pre-compilation at application install time which scans for classes(..N).dex files and compiles them into a single .oat file for execution by the Android device. For more information on the Android 5.0 runtime, see Introducing ART.

Note: While using Instant Run, Android Studio automatically configures your app for multidex when your app’s minSdkVersion is set to 21 or higher. Because Instant Run only works with the debug version of your app, you still need to configure your release build for multidex to avoid the 64K limit.

Avoiding the 64K Limit

Before configuring your app to enable use of 64K or more method references, you should take steps to reduce the total number of references called by your app code, including methods defined by your app code or included libraries. The following strategies can help you avoid hitting the dex reference limit:

Review your app’s direct and transitive dependencies - Ensure any large library dependency you include in your app is used in a manner that outweighs the amount of code being added to the application. A common anti-pattern is to include a very large library because a few utility methods were useful. Reducing your app code dependencies can often help you avoid the dex reference limit.
Remove unused code with ProGuard - Configure the ProGuard settings for your app to run ProGuard and ensure you have shrinking enabled for release builds. Enabling shrinking ensures you are not shipping unused code with your APKs.
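
For the ProGuard step, a typical release configuration looks like the following sketch (file names depend on your project):

```java
android {
    buildTypes {
        release {
            // shrink: remove unused classes/methods before dexing
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                          'proguard-rules.pro'
        }
    }
}
```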

Using these techniques can help you avoid the build configuration changes required to enable more method references in your app. These steps can also decrease the size of your APKs, which is particularly important for markets where bandwidth costs are high.

Configuring Your App for Multidex with Gradle

The Android plugin for Gradle available in Android SDK Build Tools 21.1 and higher supports multidex as part of your build configuration. Make sure you update the Android SDK Build Tools and the Android Support Repository to the latest version using the SDK Manager before attempting to configure your app for multidex.

Setting up your app to use a multidex configuration requires a few modifications to your development project. In particular, you need to perform the following steps:

  • Change your Gradle build configuration to enable multidex

  • Modify your manifest to reference the MultiDexApplication class

Modify the module-level build.gradle file configuration to include the support library and enable multidex output, as shown in the following code snippet:

android {
    compileSdkVersion 21
    buildToolsVersion "21.1.0"

    defaultConfig {
        ...
        minSdkVersion 14
        targetSdkVersion 21
        ...

        // Enabling multidex support.
        multiDexEnabled true
    }
    ...
}

dependencies {
    compile 'com.android.support:multidex:1.0.0'
}

In your manifest add the MultiDexApplication class from the multidex support library to the application element.

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.android.multidex.myapplication">
    <application
        ...
        android:name="android.support.multidex.MultiDexApplication">
        ...
    </application>
</manifest>

When these configuration settings are added to an app, the Android build tools construct a primary dex file (classes.dex) and supporting dex files (classes2.dex, classes3.dex, and so on) as needed. The build system will then package them into an APK file for distribution.

Note: If your app extends the Application class, you can override the attachBaseContext() method and call MultiDex.install(this) to enable multidex. For more information, see the MultiDexApplication reference documentation.
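A minimal sketch of that override, assuming a hypothetical Application subclass named MyApplication (MultiDex.install() comes from the multidex support library; this only runs on a device, not on a desktop JVM):

```java
import android.app.Application;
import android.content.Context;
import android.support.multidex.MultiDex;

// Sketch: enable multidex from a custom Application subclass instead of
// declaring MultiDexApplication directly in the manifest.
public class MyApplication extends Application {
    @Override
    protected void attachBaseContext(Context base) {
        super.attachBaseContext(base);
        // Install secondary dex files before any other code runs.
        MultiDex.install(this);
    }
}
```

With this in place, the manifest's android:name would point at MyApplication rather than at MultiDexApplication.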

Limitations of the multidex support library

The multidex support library has some known limitations that you should be aware of and test for when you incorporate it into your app build configuration:

The installation of .dex files during startup onto a device’s data partition is complex and can result in Application Not Responding (ANR) errors if the secondary dex files are large. In this case, you should apply code shrinking techniques with ProGuard to minimize the size of dex files and remove unused portions of code.

Applications that use multidex may not start on devices that run versions of the platform earlier than Android 4.0 (API level 14) due to a Dalvik linearAlloc bug (Issue 22586). If you are targeting API levels earlier than 14, make sure to perform testing with these versions of the platform as your application can have issues at startup or when particular groups of classes are loaded. Code shrinking can reduce or possibly eliminate these potential issues.

Applications using a multidex configuration that make very large memory allocation requests may crash during run time due to a Dalvik linearAlloc limit (Issue 78035). The allocation limit was increased in Android 4.0 (API level 14), but apps may still run into this limit on Android versions prior to Android 5.0 (API level 21).

There are complex requirements regarding what classes are needed in the primary dex file when executing in the Dalvik runtime. The Android build tooling updates handle the Android requirements, but it is possible that other included libraries have additional dependency requirements including the use of introspection or invocation of Java methods from native code. Some libraries may not be able to be used until the multidex build tools are updated to allow you to specify classes that must be included in the primary dex file.

Optimizing Multidex Development Builds

A multidex configuration requires significantly increased build processing time because the build system must make complex decisions about what classes must be included in the primary DEX file and what classes can be included in secondary DEX files. This means that routine builds performed as part of the development process with multidex typically take longer and can potentially slow your development process.

In order to mitigate the typically longer build times for multidex output, you should create two variations on your build output using the Android plugin for Gradle productFlavors: a development flavor and a production flavor.

For the development flavor, set a minimum SDK version of 21. This setting generates multidex output much faster using the ART-supported format. For the release flavor, set a minimum SDK version which matches your actual minimum support level. This setting generates a multidex APK that is compatible with more devices, but takes longer to build.

The following build configuration sample demonstrates how to set up these flavors in a Gradle build file:

android {
    productFlavors {
        // Define separate dev and prod product flavors.
        dev {
            // dev utilizes minSdkVersion = 21 to allow the Android gradle plugin
            // to pre-dex each module and produce an APK that can be tested on
            // Android Lollipop without time-consuming dex merging processes.
            minSdkVersion 21
        }
        prod {
            // The actual minSdkVersion for the application.
            minSdkVersion 14
        }
    }
    ...
    buildTypes {
        release {
            runProguard true
            proguardFiles getDefaultProguardFile('proguard-android.txt'),
                          'proguard-rules.pro'
        }
    }
}

dependencies {
    compile 'com.android.support:multidex:1.0.0'
}

After you have completed this configuration change, you can use the devDebug variant of your app, which combines the attributes of the dev productFlavor and the debug buildType. Using this target creates a debug app with proguard disabled, multidex enabled, and minSdkVersion set to Android API level 21. These settings cause the Android gradle plugin to do the following:

  • Build each module of the application (including dependencies) as separate dex files. This is commonly referred to as pre-dexing.

  • Include each dex file in the APK without modification.

Most importantly, the module dex files will not be combined, and so the long-running calculation to determine the contents of the primary dex file is avoided.
These settings result in fast, incremental builds, because only the dex files of modified modules are recomputed and repackaged into the APK file. The APK that results from these builds can be used to test on Android 5.0 devices only. However, by implementing the configuration as a flavor, you preserve the ability to perform normal builds with the release-appropriate minimum SDK level and proguard settings.

You can also build the other variants, including a prodDebug variant build, which takes longer to build, but can be used for testing outside of development. Within the configuration shown, the prodRelease variant would be the final testing and release version. If you are executing gradle tasks from the command line, you can use standard commands with DevDebug appended to the end (such as ./gradlew installDevDebug). For more information about using flavors with Gradle tasks, see the Gradle Plugin User Guide.

Tip: You can also provide a custom manifest, or a custom application class for each flavor, allowing you to use the support library MultiDexApplication class, or to call MultiDex.install() only for the variants that need it.

Using Build Variants in Android Studio

Build variants can be very useful for managing the build process when using multidex. Android Studio allows you to select these build variants in the user interface.

To have Android Studio build the “devDebug” variant of your app:

Note: The option to open this window is only available after you have successfully synchronized Android Studio with your Gradle build file using the Tools > Android > Sync Project with Gradle Files command.

Testing Multidex Apps

When writing instrumentation tests for multidex apps, no additional configuration is required. AndroidJUnitRunner supports multidex out of the box, as long as you use MultiDexApplication or override the attachBaseContext() method in your custom Application object and call MultiDex.install(this) to enable multidex.

Alternatively, you can override the onCreate() method in AndroidJUnitRunner:

@Override
public void onCreate(Bundle arguments) {
    MultiDex.install(getTargetContext());
    super.onCreate(arguments);
    ...
}

Note: Use of multidex for creating a test APK is not currently supported.

Further reading:

Nuwa

RocooFix

The Process of Adding Views and Windows in Android

Introduce

To understand the view mechanism deeply, it is necessary to walk through the addView() process. Like a View, a Window is also part of what is shown to the user, so the process of adding a Window is introduced as well.

Adding View

The main part of the adding process happens in the addView() function.

Source Code Insight: addView() in ViewGroup.java


/*
 * /android/4.0.3/frameworks-base/core/java/android/view/ViewGroup.java
 */

if (child == null) {
    throw new IllegalArgumentException("Cannot add a null child view to a ViewGroup");
}

// addViewInner() will call child.requestLayout() when setting the new LayoutParams
// therefore, we call requestLayout() on ourselves before, so that the child's request
// will be blocked at our level
requestLayout();
invalidate(true);
addViewInner(child, index, params, false);

Step 1. Go into requestLayout(); the relevant parts of its body are:

if (mMeasureCache != null) mMeasureCache.clear();

if (mAttachInfo != null && mAttachInfo.mViewRequestingLayout == null) {
    // Only trigger request-during-layout logic if this is the view requesting it,
    // not the views in its parent hierarchy
    ViewRootImpl viewRoot = getViewRootImpl();
    if (viewRoot != null && viewRoot.isInLayout()) {
        if (!viewRoot.requestLayoutDuringLayout(this)) {
            return;
        }
    }
    mAttachInfo.mViewRequestingLayout = this;
}

mPrivateFlags |= PFLAG_FORCE_LAYOUT;
mPrivateFlags |= PFLAG_INVALIDATED;

if (mParent != null && !mParent.isLayoutRequested()) {
    mParent.requestLayout();
}
if (mAttachInfo != null && mAttachInfo.mViewRequestingLayout == this) {
    mAttachInfo.mViewRequestingLayout = null;
}

Step 1.1. Why clear mMeasureCache, and what is it?

It is assigned a value in measure() as follows:

mMeasureCache.put(key, ((long) mMeasuredWidth) << 32 |
        (long) mMeasuredHeight & 0xffffffffL); // suppress sign extension

What is key? widthMeasureSpec and heightMeasureSpec are plain int values; the key packs the two measure specs into a single long (width spec in the high 32 bits, height spec in the low 32 bits), and the cached value packs mMeasuredWidth and mMeasuredHeight the same way.
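A minimal sketch of that packing (the helper class and method names are ours, for illustration; the framework does this inline in measure()):

```java
// Illustrative helper mirroring how View's measure cache packs two 32-bit
// measure specs into one 64-bit key (and measured sizes into the value).
public class MeasureKey {
    // Width spec in the high 32 bits, height spec in the low 32 bits.
    public static long pack(int widthSpec, int heightSpec) {
        return (long) widthSpec << 32 | (long) heightSpec & 0xffffffffL;
    }

    public static int unpackWidth(long key) {
        return (int) (key >>> 32);
    }

    public static int unpackHeight(long key) {
        return (int) key; // truncation keeps the low 32 bits
    }
}
```

The `& 0xffffffffL` mask prevents sign extension when the height spec is negative, which is why the framework comment says "suppress sign extension".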

Step 1.2. What are mAttachInfo and mAttachInfo.mViewRequestingLayout?
mAttachInfo is handed down from the parent when the view is attached to a window; its original source is the ViewRootImpl. mViewRequestingLayout records which view initiated the current layout request.


Step 1.3. The parent view's requestLayout() is then called, so the request propagates up the hierarchy.

Step 2. invalidate(true) marks the dirty area and propagates the invalidation so the child is redrawn.

Step 3. The main work happens in addViewInner(); the code is as follows:

if (child.getParent() != null) {
    throw new IllegalStateException("The specified child already has a parent. " +
            "You must call removeView() on the child's parent first.");
}

if (!checkLayoutParams(params)) {
    params = generateLayoutParams(params);
}

if (preventRequestLayout) {
    child.mLayoutParams = params;
} else {
    child.setLayoutParams(params);
}

if (index < 0) {
    index = mChildrenCount;
}

addInArray(child, index);

// tell our children
if (preventRequestLayout) {
    child.assignParent(this);
} else {
    child.mParent = this;
}

if (child.hasFocus()) {
    requestChildFocus(child, child.findFocus());
}

AttachInfo ai = mAttachInfo;
if (ai != null) {
    boolean lastKeepOn = ai.mKeepScreenOn;
    ai.mKeepScreenOn = false;
    child.dispatchAttachedToWindow(mAttachInfo, (mViewFlags & VISIBILITY_MASK));
    if (ai.mKeepScreenOn) {
        needGlobalAttributesUpdate(true);
    }
    ai.mKeepScreenOn = lastKeepOn;
}

if (mOnHierarchyChangeListener != null) {
    mOnHierarchyChangeListener.onChildViewAdded(this, child);
}

if ((child.mViewFlags & DUPLICATE_PARENT_STATE) == DUPLICATE_PARENT_STATE) {
    mGroupFlags |= FLAG_NOTIFY_CHILDREN_ON_DRAWABLE_STATE_CHANGE;
}

addInArray() stores the child in the ViewGroup's children array. The other operations shown above set the child's LayoutParams, assign its parent, hand over focus, dispatch the attach-to-window callbacks, and notify the hierarchy-change listener.


In the end, the View is added to the ViewGroup and displayed.

Add Window

Take Dialog as an example; its constructor is as follows:

mContext = new ContextThemeWrapper(
        context, theme == 0 ? com.android.internal.R.style.Theme_Dialog : theme);
mWindowManager = (WindowManager) context.getSystemService("window");
Window w = PolicyManager.makeNewWindow(mContext);
mWindow = w;
w.setCallback(this);
w.setWindowManager(mWindowManager, null, null);
w.setGravity(Gravity.CENTER);
mUiThread = Thread.currentThread();
mListenersHandler = new ListenersHandler(this);

The Dialog wraps its context in a ContextThemeWrapper, creates a Window, registers it with the WindowManager, and records the UI thread.

Dialog is different from PopupWindow; that will be introduced later.

Design of a High-Concurrency Architecture for a Live-Streaming Platform

Rise and current status

People spend more and more time watching video on their phones, and the range of content keeps growing. Since mobile live streaming took off in the United States around March and then caught fire domestically, it has become the vertical we focus on. Why is this vertical hot now? Part of the reason is that it is highly entertaining, low-latency, and lets viewers interact strongly with the broadcaster, so more and more people pay attention. At present at least several platforms are already online domestically, and dozens more that we are in contact with are preparing to launch; a few of them will inevitably take off.

[image]
This is a rough classification of the live-streaming businesses we have come into contact with. There is a general-entertainment category dominated by user-generated content, with little forced categorization: suggestions are given, but nothing is mandatory. There are also more strongly verticalized industries, such as finance, sports, and education.

Then there is the show-field category, which has the clearest profit model and a particularly large volume; broadcasters depend heavily on it. The core needs of these vendors are clear and come mostly from their business side: what they are good at is driving installs, keeping daily active users high, maintaining stickiness between broadcasters and users, and then monetizing. Monetization alone gives many of them headaches; those problems are enough to keep them busy, and they are what these companies are good at.

But the multimedia part has a high threshold. Two years ago, when I was working on a media cloud, everything was video-on-demand. After doing it for a while I concluded that VOD is not as hard as it looks: you need stable storage, a reliable CDN, and a usable player. You can buy all of those from a cloud-service company, outsource them, or hire one person to build them. But once things moved to mobile, and especially after live streaming caught fire in March, the threshold suddenly rose, because the content-production side moved onto the mobile device. I will elaborate on this later.

Core requirements

Let me explain where the core requirements come from. If you watch mobile live streams, you will notice broadcasters constantly asking things like "Is it laggy? Is it stuttering again? It's driving me crazy, it's stuck." Nobody asks that while watching VOD or short videos; this problem only appeared recently. Real customers keep coming back to it, which shows that the bar for streaming media has risen and their demands on it keep growing. Let's look at what those demands are.

  1. First, the content side is the publisher (push) side. The mainstream platforms are iOS and Android. iOS is relatively simple: only a few models, all easy to support. Android fragmentation, however, is severe, and a lot of effort goes into device adaptation; software encoding also drains power, the phone gets hot, and users worry it might blow up. The experience varies with the network: uploads may stutter, video may be choppy, and a wide range of errors get reported that a single developer cannot possibly handle alone. Put plainly, the user-facing requirement on the push side is: no stuttering, good picture quality, and a phone that doesn't overheat. These are the problems real customers actually raise; what follows is our slightly more technical restatement of what sits behind each one.

  2. Next is the distribution network. It hides in the background, invisible to users, so users cannot state its real requirements directly; those requirements surface through the player instead: no stuttering, no corrupted frames, and a fast first screen. Many of these actually depend on the relationship between the origin and the distribution network, but since users cannot see that layer, the demands get lumped together with the player's.

Abstracting these demands: user reachability must be good. Our CDN nodes cover every region and every carrier, including the education network. Many small carriers ignore the education network, and we have seen cases where it performs really badly because there are too few nodes; that is not a hard problem, just a pitfall you have to know about. Most of the low latency comes from the client side; the server just needs to cache well and keep the data coherent. If data must be dropped, keep the key frames and throw away the P/B frames in the middle of the GOP; most of this handling is on the receiving side.

First-screen time is measured from when the user opens the stream. Older open-source RTMP servers could not show a frame the instant a viewer joined; some newer domestic open-source projects handle this well and are worth reading. We developed our own, which took some work: the server keeps the most recent key frame so that a newly joined viewer sees an image immediately. This is a matter of details; done badly, you get a black screen, a green screen, or no image for a long time.

  3. The player side is where we receive the most complaints when we take on a customer's business, because every upstream problem shows up at viewing time, so the player takes all the blame. The requirements are no stuttering and low latency; and when latency does build up and playback catches up, the audio pitch must not change. Ideally the catch-up strategy is under the customer's own control. These are requirements users really put forward.

For us, meeting these needs means doing multi-resolution adaptation well, guaranteeing fluency, and making sure the catch-up strategy never misbehaves. The three ends are tightly coupled: pushing and distribution together protect the user's fluency and quality, while distribution and the player together guarantee low latency and smooth playback. The common point of all these requirements is "no stuttering", and the design below keeps coming back to how to avoid it.
Solution
[image]
This is our system architecture diagram. The bottom layer relies on Jinshan cloud services, which give us compute, storage, many self-built nodes (supplemented by a fused CDN where our own nodes are not enough), and data-analysis capability. On top of that sits the orange layer, our core: streaming media. Around that core we build look-back (DVR), VOD, online transcoding, authentication, and content review.

Why do look-back? A live platform, unlike a short-video recording product, does not accumulate many hot clips on its own. Without look-back it is hard for the customer to maintain daily active users and user stickiness, so customers ask for it.

Why do online transcoding? The push side already works hard, with a lot of manpower invested, to upload the best quality it can. But the viewer is also on mobile and may not be able to play that stream. If the viewer cannot watch it, what then? We transcode online; online transcoding carries more and more of that weight.

Authentication: users do not want their streams hijacked. Without push authentication anyone could push, including illegal content, so it is a must. Content review: we cannot yet review automatically; the technology is not there. What we do now is take screenshots at intervals the customer specifies, so the customer (or an outsourced team) can check whether content is sensitive and decide whether to take a stream offline. With live latency of three or four seconds this matters a lot; if you cannot do it, policy factors alone may shut you down.

Part of the data analysis builds on what Jinshan already has, and part we built ourselves, because our latency and timeliness requirements are higher. Customers often ask why a particular broadcaster suddenly became laggy. If, as before, it took an hour to generate a report and then show a graph explaining the stutter, customers would not have the patience.

We can now localize a problem down to roughly 5-second granularity, including curves from data collected at the origin. With the user's permission we also report data from the push and pull ends; fitting a few curves together tells us where the problem lies. So now more people than just R&D can debug problems; many of our pre-sales engineers shoulder the graph-reading work for users.
[image]

This is the business flow chart. There is nothing special in it, just the general movement of streaming data and the various requests, but there are some pitfalls worth highlighting. Look at the publish flow first: the app requests a stream address from its own server, uses that address to push to our streaming server, and then we authenticate it.

After authentication, a parameter selects whether to record. If recording or HLS distribution is needed we do it and store the result in our storage; as mentioned later, we isolate businesses by priority, so this back-end multimedia processing depends on other services as much as possible. Then comes the normal end-of-stream flow.

Here is a problem met in practice: with streaming, the customer wants to know when a push has ended. How do cloud-service providers usually do it? Callbacks: when the push ends, we call back the business side so it knows the stream is over and can run its own logic.

But in practice the business side's server is not that reliable: callbacks can be heavily delayed or lost, and we cannot vouch for its stability, so the two sides end up coupled. Also, since we are the ones calling in, its authentication cannot be made very sophisticated, leaving security holes in its own server; if someone attacks it, its entire business flow is thrown into chaos.

After testing with several customers we switched to another approach, which is now generally accepted: the app keeps a heartbeat with its own server. If the app's network is fine, its server naturally knows when the stream ends; if the heartbeat breaks abnormally, the server declares the stream ended. On our side, the origin guarantees that if no data arrives for 5 seconds the stream is considered ended and we kick it off. This way the customer's view of business state is stable, our streaming service is stable, and the coupling between the two sides stays small.

This is a pitfall we actually hit. It is not hard, but since most cloud providers still use callbacks, I mention this alternative, which works better.
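The origin-side half of that scheme can be sketched as follows (class and method names are ours, not from the platform): a stream with no data for longer than the timeout window is considered ended.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the origin-side rule described above: a stream that
// has sent no data within the timeout is declared ended and can be kicked off.
public class StreamLivenessTracker {
    private final long timeoutMillis;
    private final Map<String, Long> lastSeen = new HashMap<>();

    public StreamLivenessTracker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called whenever a packet of stream data arrives at the origin.
    public void onData(String streamId, long nowMillis) {
        lastSeen.put(streamId, nowMillis);
    }

    // True if the stream never sent data or has been silent past the timeout.
    public boolean isEnded(String streamId, long nowMillis) {
        Long last = lastSeen.get(streamId);
        return last == null || nowMillis - last > timeoutMillis;
    }
}
```

With a 5000 ms timeout this matches the "no data for 5 seconds means the stream ended" rule described above; the app-to-server heartbeat is the independent second signal.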

The playback flow: the player first requests its playback address from its own service, then pulls the stream from us, with or without authentication depending on the business. If the pull fails, we have some customized behavior: for RTMP pulls we report exactly what went wrong, including authentication failure, bad authentication parameters, or a problem with the stream itself. Users asked to know where playback broke, so we return status codes in as much detail as possible; our origin also has a query interface for unified queries.

Push-side implementation

The design principle of the push side is downward adaptation. Anyone can push a stream; there is plenty of open source. So why are some implementations good and some bad? It comes down to how well they adapt.

Summing up, there are three kinds of adaptation. The first is frame-rate and bit-rate adaptation, which everyone thinks of: if the network is congested while pushing, lower the frame rate or the bit rate a little so the stream keeps flowing without stuttering. Doing this well is hard; we built a QoS module for it, and besides the engineers on our team there are four or five PhDs working solely on the algorithms.

Some details: when we adapt the bit rate, we feed it directly back to the encoder so that the encoder adjusts its rate dynamically, preserving as much quality as possible and lowering the rate smoothly. Frame-rate control is simpler: when we detect network congestion, we feed that back to the frame-rate control module.

At capture time we also drop some frames, to reduce the bandwidth we send. All of this runs over TCP; UDP would certainly do better and is our next step, but it has not started yet, because UDP also requires restructuring part of the origin, which we have not had time to do. The TCP-based results are actually good. On top of this simple adaptation we also add algorithmic adaptation, whose effect is more obvious.
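As a rough illustration of the bit-rate feedback loop described above (an AIMD-style sketch under our own assumptions; the real QoS module uses proprietary algorithms), the controller cuts the encoder's target rate multiplicatively on congestion and probes back up additively on recovery:

```java
// Hypothetical sketch of encoder bit-rate feedback: multiplicative decrease
// on congestion, additive increase on recovery, clamped to a floor/ceiling.
public class RateAdapter {
    private int bitrateKbps;
    private final int minKbps;
    private final int maxKbps;

    public RateAdapter(int initialKbps, int minKbps, int maxKbps) {
        this.bitrateKbps = initialKbps;
        this.minKbps = minKbps;
        this.maxKbps = maxKbps;
    }

    // Congestion reported: cut the target rate to 3/4, never below the floor.
    public int onCongestion() {
        bitrateKbps = Math.max(minKbps, bitrateKbps * 3 / 4);
        return bitrateKbps;
    }

    // Network recovered: creep back up by a fixed step toward the ceiling.
    public int onRecovered() {
        bitrateKbps = Math.min(maxKbps, bitrateKbps + 100);
        return bitrateKbps;
    }

    public int currentKbps() {
        return bitrateKbps;
    }
}
```

The returned value would be handed straight to the encoder as its new target bit rate, so quality degrades smoothly instead of the stream stalling.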

The second is hard/soft codec self-adaptation. Hardware encoding's advantage is easy to understand: the phone does not get hot. But it has many drawbacks: with MediaRecorder, audio/video sync is hard; MediaCodec has compatibility problems; so it is not yet widespread. Software encoding gives a lower bit rate and better quality; apart from making the CPU very hot, everything else is an advantage.

How to combine the two? We currently do it with strategy, and it is manual labor: we maintain black and white lists on our side. The top 50 to top 100 high-end models we test ourselves; if performance is fine, we use software encoding, because, as just said, software encoding is all advantages except heat.

Some popular models are low-end and cannot stand software encoding, so they get hardware encoding. Because hardware adaptation is manual work, the set of models is necessarily limited; no one can guarantee that every platform and model works with hardware encoding, so the remaining non-popular models are adapted to software encoding as time permits. Doing this, we reach an adaptation rate above 99%, a figure verified with some large customers.

The third is algorithmic adaptation. We were the first company to commercialize H.265. Did you know H.265 can already be played commercially in the browser with no plug-in? We can now play 720P video at 30 FPS on a Celeron machine without installing any browser plug-in, the result of continuous optimization. Of course that is not suited to mobile scenarios; we use it in another scene.

On mobile we do 720P H.265 encoding on iOS phones at 15 FPS without maxing out the CPU, perhaps between 50% and 70%; earlier figures would have saturated a core. This is possible because we have long had a strong algorithm team. It started out doing technology licensing and later looked for products to land in, and mobile live streaming is a very good landing scene for H.265. Why?

The push side's job is to push up the best quality it can over a limited network. H.265 saves about 30% of bandwidth relative to H.264. In VOD, 30% just saves some money, which early on nobody cares about, because broadcasters cost more; who cares about 30% of bandwidth?

But mobile pushing is different: 30% is the difference between 480P and 720P. Where you could previously only push 480P quality, H.265 lets you push 720P. The only requirements are a good enough network and a good enough CPU, and then why not push better video? That is the H.265 scene: using our algorithmic advantage, as long as the device can run H.265, we can push up better picture quality.

Distribution network: multi-cluster origin design

[image]

The distribution network hides far away from users. Our three design principles were high concurrency, high availability, and system decoupling. The first two are generic: every system wants high concurrency and high availability, and to scale horizontally as easily as possible.

We built multi-cluster origins, as opposed to the single-origin approach many companies take, so that users reach our network more easily. We run multiple origin clusters per city, and not only in a few places: we also have points in Hong Kong and the United States. Achieving horizontal scaling plus isolation between data and the business center took some thought, but the scheme is not hard, and multi-copy storage synchronization is done as well.

High availability is things like DNS: avoid single points of service and high availability follows. Decoupling the system is harder. A traditional CDN only distributes streams; our advantage is that besides distribution we do many multimedia functions: screenshots, recording, transcoding, multi-resolution adaptation, and all of these affect system stability. Truly decoupling them while keeping the system stable took a lot of work.

Some open-source servers also do multi-resolution adaptation, but all their transcoding is scheduled by the streaming service itself, which also controls the transcoders' life cycle; the two are deployed at the same level. That is a big problem: multi-resolution adaptation and the pushing and distribution of the original picture are not services of the same priority. When tiering a system, they should be separated into different subsystems.
[image]
Multi-cluster origins, as just mentioned: use multi-carrier or BGP data centers wherever possible, distributed across cities north and south, as close to users as possible so that users can push easily. In every origin cluster we also deploy Jinshan cloud storage, KS3.

Deploying storage there is also meant to better guarantee the reliability of users' screenshots and recording files: once they are handed to KS3 we no longer need to worry about them (the multimedia side of KS3 is also maintained by us). Transcoding, screenshots, resolution conversion, or any combination of these operations are done by a separate system; we decouple these multimedia services from the source-station service.

Online transcoding is a very CPU-intensive business. Even on a high-end 24-core machine, if I want to transcode good-quality streams into three resolutions each, eight streams already saturate the box. And if I transcode a stream that nobody watches, that CPU is simply wasted, so this workload is not suitable to co-locate with the source-station service.

Transcoding should stay close to the data, so within each source-station cluster's machine room we reserve some transcoding resources, scheduled centrally from the core room. We separate scheduling from the actual work: wherever you push your stream, we transcode nearby. We have also added some real-time transcoding strategies.

Why transcode online at all? Because the pushing side does its best to send up the best picture quality and the highest bandwidth it can, but the player may not be able to consume that. H.265, for example, is good, but its biggest problem is that there is no way to play it in a mobile browser: anything shared out must be H.264, or it cannot be viewed in the WeChat or QQ browser.

So even if very deep technical means let you push up an H.265 stream with excellent picture quality, a share link could not be viewed; when you want to share, we transcode an H.264 copy for you. Transcoding is a high-CPU-occupancy scenario, and without reasonable CPU allocation the machine resources would be exhausted very quickly.

We use two strategies. The first is reasonable scheduling on a limited number of machines. Our transcoding system is distributed and pipelined, somewhat like Storm, but built specifically for transcoding. When a task arrives, the first stage of our pipeline is not transcoding but analysis: what are you transcoding into, at what quality, and roughly how much CPU will it take?

A task that needs a lot of CPU is hard to reschedule. Suppose a four-core transcoding task arrives followed by a batch of one-core tasks: the one-core tasks are easy to place, since I can schedule them onto another machine and almost any machine has a spare core; but a machine with only three cores left cannot take the four-core task. So we prioritize: high-CPU-occupancy tasks are allocated first, then low-CPU ones. In the pipelined system, the pre-analysis stage throws different tasks into different priority queues, and these queues feed the workers that produce the different resolutions of video.
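The scheduling idea above can be sketched with a priority queue ordered by estimated CPU cost. This is a minimal illustration under assumed names (`TranscodeTask`, `cpuCores`, `nextFor`), not the actual scheduler described in the talk:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch: the pre-analysis stage estimates each task's CPU cost, and the
// scheduler hands out high-CPU tasks first so they are not stranded behind
// a crowd of one-core tasks.
class TranscodeTask {
    final String streamId;
    final int cpuCores; // estimate produced by the pre-analysis stage

    TranscodeTask(String streamId, int cpuCores) {
        this.streamId = streamId;
        this.cpuCores = cpuCores;
    }
}

class TranscodeScheduler {
    // Biggest estimated CPU cost at the head of the queue.
    private final PriorityQueue<TranscodeTask> queue = new PriorityQueue<>(
            Comparator.comparingInt((TranscodeTask t) -> t.cpuCores).reversed());

    void submit(TranscodeTask task) {
        queue.add(task);
    }

    // A worker reports its free cores; it gets the head task only if it fits.
    TranscodeTask nextFor(int freeCores) {
        TranscodeTask head = queue.peek();
        if (head != null && head.cpuCores <= freeCores) {
            return queue.poll();
        }
        return null; // head is too big; leave it for a worker with more cores
    }
}
```

Keeping an oversized head task in place is what gives high-CPU jobs priority; the cost is that small workers may idle briefly, which a production scheduler would mitigate by also scanning for the largest task that does fit.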

Downgrading and disaster recovery later also rely on these priority queues, and each user gets a quota. The 24 cores I just mentioned are actually far too small a scale for a cloud service company. When I worked on Baidu's media cloud, we transcoded 300,000 videos a day; as a business grows, 30 million transcodes a day is normal.

This really tests a project: how to keep the CPUs as busy as possible, because the peaks and troughs are obvious. Take H.265: we do real-time transcoding, so the moment someone shares a stream we start transcoding it, and the first viewer gets an almost instant start. But if nobody watches, a strategy stops the transcode as soon as possible. Shared-out video is not a high-concurrency business, so transcoding only when someone is watching is the more reasonable approach.

For the low resolutions we are now gradually rolling this out as a gray release as well: instead of transcoding every variant as soon as you push, we progressively decide to transcode only when someone is watching, to save system resources. Later we will also consider storage resources: every room has storage, which uses no CPU at all, only disk and IO, so resources are not being fully reused; the deployments could be mixed, and we will consider mixing them step by step.
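A minimal sketch of that transcode-on-demand strategy, assuming a per-stream viewer counter (all names here are hypothetical):

```java
// Sketch: start the share transcode when the first viewer arrives, and stop
// it as soon as the last viewer leaves, so CPU is only spent on watched streams.
class OnDemandTranscode {
    private int viewers = 0;
    private boolean running = false;

    void onViewerJoin() {
        viewers++;
        if (!running) {
            running = true; // here: launch the real-time transcode worker
        }
    }

    void onViewerLeave() {
        if (viewers > 0) viewers--;
        if (viewers == 0) {
            running = false; // here: tear the worker down again
        }
    }

    boolean isRunning() {
        return running;
    }
}
```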

In CDN distribution there are many things the player has to cooperate with. For example, to guarantee good quality the streamer now adds B-frames and enlarges the GOP, so the encoded video quality gets better; but with a larger GOP the latency grows, because a viewer must start decoding from the last keyframe, so they may be watching video from 5 or 10 seconds ago, which is unbearable for social mobile broadcasting. Since the quality demand exists, the source station has to keep the large GOP; digesting the resulting delay is up to the player side.

  1. The player-side solution

This is the block diagram of the playback side, drawn a little simplified in the middle; a traditional player block diagram would not reflect our core technical points. After data is received from the network and demuxed from RTMP, we have a module that decides whether the current video needs to be discarded. The principle is tied to our cache: we buffer two seconds, and if the buffer exceeds two seconds, or exceeds some other threshold, we switch into discard mode.
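The two-second buffer rule can be sketched as a small state machine. The class and threshold names here are assumptions for illustration; the talk only fixes the roughly two-second cache depth:

```java
// Sketch: track how much media is buffered; past the threshold, enter
// discard mode; once back near the target depth, leave it again.
class JitterBuffer {
    static final long DISCARD_MS = 2000; // "more than two seconds" from the talk
    static final long RESUME_MS = 500;   // assumed hysteresis point

    private long bufferedMs = 0;
    private boolean discarding = false;

    void onFrameQueued(long frameDurationMs) {
        bufferedMs += frameDurationMs;
        if (bufferedMs > DISCARD_MS) {
            discarding = true; // falling behind the anchor: start catching up
        }
    }

    void onFrameConsumed(long frameDurationMs) {
        bufferedMs = Math.max(0, bufferedMs - frameDurationMs);
        if (bufferedMs <= RESUME_MS) {
            discarding = false;
        }
    }

    boolean shouldDiscard() {
        return discarding;
    }
}
```

The hysteresis between the two thresholds keeps the player from flapping in and out of discard mode on every frame.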

Discarding has several strategies: some drop frames directly, some fast-forward. Anyone who has written a player knows that traditionally, video catch-up is done after decoding. But decoding costs CPU, and with 720p video that can barely be decoded in real time, there is basically no headroom left for catching up.

So we optimized the algorithm: when a frame arrives we judge whether it can be dropped, and if so we drop it before decoding. That raises a problem: the decoder sees an internal discontinuity, and a discontinuity inside the decoder can produce a black screen. So even for frames we want to drop, we either do some custom development inside the decoder, or pass the droppable frames in and let the decoder discard them itself without fully decoding them. This lets the player catch up with the anchor's actual live position much faster.
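A minimal sketch of the pre-decode decision, assuming the demuxer exposes the frame type. Only non-reference B-frames are treated as droppable here, so the decoder never sees a broken reference chain, which is what causes the black screen mentioned above:

```java
// Sketch: during catch-up, drop only frames that nothing else references.
// Keyframes and P-frames must still reach the decoder, or later frames
// decode against a missing reference and the picture breaks up.
enum FrameType { KEY, P, B }

class PreDecodeDropper {
    private boolean catchingUp = false;

    void setCatchingUp(boolean on) {
        catchingUp = on;
    }

    // Called for each frame before it is handed to the decoder.
    boolean shouldDrop(FrameType type) {
        return catchingUp && type == FrameType.B;
    }
}
```

Note that in H.264 some B-frames can themselves be references; a real player would check the stream's disposable flag rather than the frame type alone.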

With this in place, if the network is good and we do not have to worry about future jitter, we can achieve a 2-second delay from push to watch; but we generally keep it at 4 seconds, to protect against jitter.

That was the drop logic. The alternative is fast-forwarding, in the style of Douyu: the backlog at startup plays through quickly, but with no audio. What we do now is fast-forward the video and the audio together; since the effective sampling rate changes, the pitch of the audio changes too. Having worked on client-side experience before, I know there are many open source time-stretch algorithms that change speed without changing pitch, and they work quite well; with a little reverse optimization, dropping one in keeps the audio at the same pitch.

Log collection: not every developer is willing to accept it, but some press us for it, because they need the data to locate problems. As I said, people constantly ask, "why is it stuttering again?" The question comes up so often that everyone wants to know why, and without collecting client-side logs there is no way to locate the problem. Our current strategy, with the user's consent, periodically packs a few hundred labeled log entries into a ZIP and uploads it; this data is shared with the developer.

We have stepped in this pit ourselves. At the beginning we built on VLC, because our media cloud started with on-demand business, and VLC is a very good framework for that; but implementing this catch-up logic in VLC can drive you mad, it is particularly hard to change. It has a heavy layer of internal coupling, and even after we finally got the change in, audio stuttering remained. In the end we switched to a much simpler framework and wrote the upper-layer control ourselves. Mobile live streaming and on-demand remain very different scenarios, which is why so many voices have suddenly appeared recently about the barrier to entry of the live video business.

Everything on this page I have already touched on: how we locate problems, how we handle player compatibility, the catch-up experience, and package size, since we care about the size of the APP. Because both capture and playback are provided by our end-to-end solution, many libraries can be reused; if you use both ends, we can merge these libraries to minimize the compressed size we add.

User stories

These are cases of users who have actually integrated with us; some mainly use hardware encoding, some software encoding, and their products differ in many details. Through these cases we also analyze which products are suited to social live streaming. One user we have watched took off on the strength of its follower relationships from the very beginning; it has also raised the most requirements, and is the keenest on H.265. Once a product has that social relationship and has really passed the trial-and-error stage, it cares a great deal about the quality of the content it produces; since ours is an end-to-end service, it is well suited to such users.