Getting Started Using TensorFlow: Install Part

Install on Ubuntu

Install Ubuntu

Download the ISO of Ubuntu 14.04 and install it in VirtualBox.

Install TensorFlow


$ sudo apt-get install python-pip python-dev

$ sudo easy_install pip
$ sudo easy_install --upgrade six

# Ubuntu/Linux 64-bit, CPU only, Python 2.7
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc1-cp27-none-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 2.7
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc1-cp27-none-linux_x86_64.whl

# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc1-py2-none-any.whl

# Mac OS X, GPU enabled, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc1-py2-none-any.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.4
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc1-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.4
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc1-cp34-cp34m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, CPU only, Python 3.5
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0rc1-cp35-cp35m-linux_x86_64.whl

# Ubuntu/Linux 64-bit, GPU enabled, Python 3.5
# Requires CUDA toolkit 7.5 and CuDNN v5. For other versions, see "Install from sources" below.
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.11.0rc1-cp35-cp35m-linux_x86_64.whl

# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.11.0rc1-py3-none-any.whl

# Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.11.0rc1-py3-none-any.whl

You can also download the .whl file first and then install it locally; this suits an unstable network.

The install command is:

$ sudo pip install --upgrade $TF_BINARY_URL
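
If the install succeeds, a quick import should print the version (0.11.0rc1 for the wheels above):

$ python -c "import tensorflow as tf; print(tf.__version__)"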

Run Hello World


import tensorflow as tf

# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])

# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
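
The snippet above only builds the graph; nothing is computed yet. To evaluate the matmul op, launch the graph in a session. A minimal continuation using the TensorFlow 0.x API:

# Launch the default graph and run the matmul op.
sess = tf.Session()
result = sess.run(product)
print(result)  # ==> [[ 12.]]
sess.close()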

ref

tensorflow-learning-notes

Build a Spring Boot multi-module project with Maven

1. Create a Maven project in IDEA

1-1: Delete the src and target directories, leaving only pom.xml.

1-2: The root pom.xml can be inherited by the sub-modules. This project is just a demo and does not worry about fine-grained dependency management, so most dependencies

are written in the root-level pom.xml, and the sub-modules simply inherit them.

1-3: The root-level pom.xml file is in Appendix 1.

1-4: Depend on the MyBatis and Spring Boot related modules.

2. Create a sub-module (module)

2-1: File > New > Module, enter model

2-2: File > New > Module, enter dao

2-3: File > New > Module, enter service

2-4: File > New > Module, enter webapi

3. Modify the submodule pom.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>parent</artifactId>
        <groupId>com.luyh.projectv1</groupId>
        <version>1.0-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>projectv1-model</artifactId>
</project>

Note: the ../pom.xml relativePath must be included to inherit from the parent module.

At this point the project skeleton is complete and we could start writing code; but wait, let me first introduce each sub-module's responsibility.

4. Sub-module 'job responsibilities' in the project

model: this module holds all entity classes

dao: this module holds the concrete data-interaction implementations for the service layer to call

service: this module holds the business-logic implementations for the API layer to invoke

webapi: this module would not normally appear in such a project; it is included so the demo has a web API layer

5. model layer: writing the entity classes

Create the package com.luyh.projectv1.model.

Create the entity class Member.java; for the specific code please clone my git repo (the address is at the bottom).

6. dao layer: database operations

Create the package com.luyh.projectv1.dao.config; it holds only the two Java configuration classes needed for Spring Boot auto-configuration.

See the code for the specific content of MemberMapper.java.

Create MemberMapper.xml under resources/mybatis.

Create IMember.java.

Create the implementation class of the IMember interface.

Create the resources/application.properties file to configure the database connection; see the sketch below.
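
For illustration only, a minimal application.properties might look like this; the URL, username, and password are hypothetical and must match your own MySQL setup:

# Hypothetical datasource settings for the demo; adjust to your environment.
spring.datasource.url=jdbc:mysql://localhost:3306/projectv1?useUnicode=true&characterEncoding=utf8
spring.datasource.username=root
spring.datasource.password=secret
spring.datasource.driver-class-name=com.mysql.jdbc.Driver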

7. service layer: write the business logic

Create the com.luyh.projectv1.service package.

Create the IMemberService.java interface.

Create the MemberService.java implementation class.

The MemberService.java class automatically injects DaoMember and calls its methods to get the data.

8. webapi: write the web API that returns JSON data

Create Application.java to start the application.

Create com.luyh.projectv1.webapi.controller.MemberController.java and write a REST-style controller.

Start it up.
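
As a rough sketch of what a REST-style controller can look like here (names such as getMemberById and the /member/{id} mapping are assumptions; the real MemberController is in the repo):

package com.luyh.projectv1.webapi.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.luyh.projectv1.model.Member;
import com.luyh.projectv1.service.IMemberService;

@RestController
public class MemberController {

    @Autowired
    private IMemberService memberService;

    // Returns the member as JSON; the service method name is hypothetical.
    @RequestMapping("/member/{id}")
    public Member getMember(@PathVariable("id") long id) {
        return memberService.getMemberById(id);
    }
}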

9. SQL file: please import the MySQL data SQL file.

Here is the project address, click to download

Appendix 1


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.luyh.projectv1</groupId>
    <artifactId>parent</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.3.RELEASE</version>
    </parent>
    <modules>
        <module>model</module>
        <module>dao</module>
        <module>service</module>
        <module>webapi</module>
    </modules>

    <!-- Declare dependencies -->
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jdbc</artifactId>
        </dependency>

        <dependency>
            <groupId>org.mybatis</groupId>
            <artifactId>mybatis-spring</artifactId>
            <version>1.2.2</version>
        </dependency>
        <dependency>
            <groupId>org.mybatis</groupId>
            <artifactId>mybatis</artifactId>
            <version>3.2.8</version>
        </dependency>

        <dependency>
            <groupId>org.apache.tomcat</groupId>
            <artifactId>tomcat-jdbc</artifactId>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
    </dependencies>

    <!-- Set Maven repositories -->
    <repositories>
        <repository>
            <id>spring-releases</id>
            <url>https://repo.spring.io/libs-release</url>
        </repository>
    </repositories>
    <pluginRepositories>
        <pluginRepository>
            <id>spring-releases</id>
            <url>https://repo.spring.io/libs-release</url>
        </pluginRepository>
    </pluginRepositories>
</project>

ref

spring-boot
example code

Getting Started Using TensorBoard, Part I

Instruction

The code is as follows:

# Copyright 2015 Google Inc. All Rights Reserved.  
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""A simple MNIST classifier which displays summaries in TensorBoard.

This is an unimpressive MNIST model, but it is a good example of using
tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of
naming summary tags so that they are grouped meaningfully in TensorBoard.

It demonstrates the functionality of every TensorBoard dashboard.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data


flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
                     'for unit testing.')
flags.DEFINE_integer('max_steps', 1000, 'Number of steps to run trainer.')
flags.DEFINE_float('learning_rate', 0.001, 'Initial learning rate.')
flags.DEFINE_float('dropout', 0.9, 'Keep probability for training dropout.')
flags.DEFINE_string('data_dir', '/tmp/data', 'Directory for storing data')
flags.DEFINE_string('summaries_dir', '/tmp/mnist_logs', 'Summaries directory')


def tb2():
    # Run a trivial graph and write its summaries for TensorBoard.
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    sess.run(hello)
    train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train',
                                          sess.graph)
    a = tf.constant(10)
    b = tf.constant(32)
    tf.scalar_summary('accuracy', b)
    tf.histogram_summary('sss/activations', b)
    sess.run(a + b)
    merged = tf.merge_all_summaries()
    summary_str = sess.run(merged)
    train_writer.add_summary(summary_str, 1)


if __name__ == '__main__':
    tb2()
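
To view the dashboards, run the script (the file name tb2.py is just a placeholder for wherever you saved it) and point TensorBoard at the summaries directory configured above:

$ python tb2.py
$ tensorboard --logdir=/tmp/mnist_logs/train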

ref

tensorfly.cn

Learning The Automatic Testing System of Nuomi

This article introduces the automated testing scheme for Baidu Nuomi's component-based O2O mobile framework, a break from traditional mobile testing programs. It consists of two parts: component automation testing, and on-device monitoring and alerting.

Background

Mobile apps have evolved from the early native architecture to hybrid frameworks and on to today's component-based architecture. As app technology keeps innovating, test automation frameworks have kept multiplying, including Appium (iOS and Android), Robotium, Calabash, EarlGrey (an open-source iOS automation framework), and so on.

These tools have been instrumental in testing many products, but testing technology has lagged behind the changes in app development technology. Existing test automation frameworks always feel inadequate for the testing needs of current products; for products built on component-based or React Native style architectures, the open-source frameworks described above fall short.

For this reason, after Nuomi's switch to a component architecture (for the component concept, please refer to the earlier article on the componentization of Nuomi's mobile architecture, not repeated here), our test automation technology went through a qualitative change.

Component Automation Test Scheme

Early automation tools and their drawbacks

The mobile app component team initially adopted automated tools and frameworks to cope with heavy manual testing, the lack of offline regression, and low test coverage. We gradually found, however, that the automated tools could not really reduce the cost of manual testing: they were not easy to use, the learning cost grew, and the case pass rate was unsatisfactory. Some departments treat the results of automatically run test cases as the last line of defense before release; even granting that the final data is accurate, it is fair to ask whether testers really benefit from these so-called test tools.

Here is an analysis of the currently popular test tools:

[figure: analysis of popular test tools]

On top of the tools above, many companies have built cloud-testing frameworks, with plenty of automation for special test items, including providing real test devices. But for component-based apps, whose underlying technologies are being adopted ever more widely, these solutions cannot keep up with product feature iteration.

Component control identification: a component loads the same code on iOS and Android, so Appium could in principle meet current testing requirements, but identifying the control attributes of component pages is its weak point, and the attribute elements that can be identified differ completely between the two platforms, which raises maintenance costs for testers and adds instability when running automated cases.

Long paths to the component entrance: reaching the component page under test often requires logging in, making selections, and other steps. The test focus is at the end, but the lengthy steps reduce the stability of the automation, which may be interrupted midway. If the unnecessary steps could be skipped and the entrance reached directly, such problems would be avoided.

Cross-platform stability: cross-platform stability has long worried many companies. Ensuring seamless interaction between the client and the server, and keeping long-running service API calls from failing, are weaknesses of existing tools.

Learning and maintenance costs: if an automation platform or tool has complex requirements for environment configuration, usage conventions, or scripting, it imposes a real learning cost on newly joined test engineers.

The Automation Framework Solution for Nuomi Components

In response to these problems, the Baidu Nuomi NA team provides the following solutions:

1) Element recognition is implemented by crawling the structure of the component page through Google Chrome; elements can be located by their attributes, or by writing XPath or jQuery selector expressions. The resulting element locators work for both iOS and Android at the same time.


2) For the pages and page elements under test, a long page path with many steps before the entrance increases case instability. Nuomi implements direct Schema jumps straight to the page under test, shortening the path and improving stability; other products whose pages can be reached through Schema jumps can adopt the same approach.

For example, a test concerned with functionality inside the cinema page would normally walk through intermediate entrance steps that often make the case unstable; with this scheme those steps are avoided.

3) The phone installs the App and pulls cases from the server, after which the physical connection can be disconnected. For particularly unstable scenarios or special-model needs, such as the latest iPhone series or the latest Android phones, any team can run the cases themselves, filling the platform's resource gap for special circumstances.

4) A single case fulfills a single business test requirement; a test suite implements the daily scheduled runs on phones, combining several single-scenario test requirements into one unified, complex test scenario that component teams can self-trigger in a daily time slot.


5) To meet the fast iteration needs of the business, everything goes through platform configuration: no special language skills are required, and business test engineers only need to configure their specific business scenarios to get scripts running stably every day, with zero learning cost and training investment.

To achieve the features above, install the automation App on the client and configure the scripts and scenes on the server, and automation can be used to the fullest. The following is the wireless automation framework for Android phones:

The phone installs the automation App, and the HTTP-request side is supported by the back-end platform: configuration information, script information, and case run report statistics are created, updated, and summarized on the platform. Once prepared, the phone can run a designated test-set ID in real time, entered manually or as a daily job, producing functional regression results from the automation as well as performance indicators and other information for special tests.

The relevant wrapper interfaces are implemented on top of the UIAutomator framework: each action in a case is parsed, the wrapped APIs are called to run the page UI operations, and failed case steps are captured as screenshots and uploaded so the specific failure can be determined. Because wireless connections are supported and jobs trigger continuously, the capability for parallel runs is also satisfied.

Active inspection through component-side monitoring

From release to rendering, every link in a component's life cycle can run into unexpected problems. Online ANR, crashes, response times, and so on can be monitored in real time through instrumentation embedded in the App or a third-party SDK, with a back-end platform providing alarm information. But full coverage of complete component page rendering has never been achieved: for example, the back end returns data normally, but on a particular phone a failure in the positioning function breaks the page; or a module does not check whether the App-side API initialization interface is ready and directly calls an internal method, making the page abnormal. Such cases cannot be discovered through ordinary offline logs, instrumentation points, or the interface alarm platform. The business side therefore raised the following questions:

How to monitor UI anomalies when component pages load on the front end;
How to provide the auxiliary information needed for investigation once an anomaly is located;
How to deploy the monitoring points, what the running mechanism is, and how to implement the alarm strategy.

Component page load exception monitoring scheme

The Nuomi team monitors component page load anomalies at the UI level mainly by having real devices run through the list of component pages in real time, polling periodically and scanning each page. Besides monitoring for missing elements on component pages, it also monitors JS error dialogs and failed requests. The specific scheme is as follows:


1) Monitoring engine: the core of the whole scheme is the injected Luban.js file, which can scan and analyze a component's H5 pages (providing three capabilities). How is the JS injected into the app under test? We inject the file through the platform: the app watches whether the monitoring JS file has been updated and decides whether to load it, and a switch in the app under test enables dynamic JS injection, so that every component or H5 page can reference the file and the monitoring engine can do its work.

2) Monitoring page: its main function is to pull the monitoring points, monitoring strategy, configuration information, and so on for the monitoring engine to use. For example, when the page of component 1 is opened, the monitoring engine triggers automatically, scans the page against that page's monitoring information, and reports the success or failure flag back to the monitoring page, which uploads what it hears to the platform.

Let's look at what the scheme can do:

When the monitoring page pulls the monitoring items we added manually, the monitoring engine does the following: it judges whether the expected DOM elements exist, judges failed requests, and judges JS exception dialog frames.

The following are actual problems caught by online alarm monitoring:

1) Failed monitoring of page elements (DOM element presence judgment).

2) A JS error box on a component page (JS exception dialog judgment).

3) A failed request on a specific link of a component page interface (request failure judgment).

At the UI level, the three types of judgment logic above basically cover all monitoring of component page load anomalies.

Of course, the app framework itself also uploads monitoring logs in real time to the back-end platform, which monitors things such as component package download success, component update success rate, and component end-to-end response time, all analyzed from online user logs. For our QA team, reflecting the online user experience at the first moment requires exactly this kind of active inspection; it does a great job of identifying and eliminating potential risks in advance, before a flood of warnings.

Component Problem Locating Scheme

Locating online problems has always been very painful for testers, especially when a user feeds back an occasional issue. The common approach is to look up the phone model, logs, occurrence time, and the user's feedback to figure out what the user ran into, but it is hard to tie a specific issue to the user's phone state, network, system, app version, component version, whether anti-cheating was triggered, and other possibilities. If we can capture the user's request and the back-end data, as well as how the page rendered at the time, then together with the information mentioned before, roughly 80% of problems can be located; the same holds for online active inspection.

We record interface requests and the returned JSON results in the Luban.js file, and at the same time automatically capture page screenshots, alarm time, the running system, and other information for each component team to investigate. In the captured data, request_params and http_responses are the request obtained at alarm time and the result the response returned for that request.

The alarm also provides the exact alarm time, the monitored elements, and screenshots, through which the cause of the problem can be located. For example, a takeout shop page should have had a 'hot' menu, but the back-end data was missing, possibly because of a back-end Redis problem; troubleshooting needs the back-end log for errorno: 114013. This information gives the on-duty engineer a much narrower range of solid leads, which greatly helps the rapid investigation of online problems and reduces damage.

Monitoring point deployment and alarm strategy

A component page layout generally divides into banner, category, search filter, list pages, and so on; our DOM element monitoring covers these features.

Alarms can be configured so that after a certain number of consecutive alarm logs for a component ID, the owners are reminded by SMS and e-mail to resolve the issue as soon as possible.

At present, the whole Nuomi NA QA team performs active inspection on real devices, split across iOS and Android, mainly for the core components; for other components, monitoring tasks can be configured for special scenes or special models. The platform can provide independent services as well as monitored real-device services, which largely meets the monitoring needs of the various components and protects the real user experience of the whole Nuomi App.

reference articles:

Learning Network Optimization for Mobile at Ctrip

Introduction

Ctrip Travel is used by travelers around the world, so network optimization is its most important performance and user-experience optimization. Below we share Ctrip's exploration of network and application architecture optimization:

Highly reliable, low-latency app network services are essential to the steady development of the wireless business. Over the past two years we have continuously optimized the App's network service performance, and by the end of Q2 this year we had basically completed the phase goals of the App's network channel management and performance optimization. The author summarizes the lessons learned here to lay a foundation for future work.

Ctrip App wireless network service architecture

In 2014, Ctrip developed the Mobile Gateway for wireless services. There are two types: TCP Gateway and HTTP Gateway. The TCP Gateway is designed for native network services in the App; it uses an application-layer protocol designed on top of TCP, similar to an RPC mechanism. The TCP Gateway combines the functions of the access layer and dynamic service routing. The access layer is implemented on Netty and manages the client's long and short TCP connections. The dynamic routing function is based on Netflix's open-source Zuul, so services plugged into the TCP Gateway, such as routing, monitoring, anti-crawling, and user authentication, can provide dynamic routing, monitoring, resiliency, security, and more.
After each TCP service request arrives at the TCP Gateway, it is forwarded to the corresponding back-end service cluster according to the service number in the packet header, which decouples the back-end services. Forwarding from the TCP Gateway to the back-end business service clusters uses HTTP interfaces: the complete packet of a TCP service request is forwarded as the payload of an HTTP request, and after the HTTP response is received, its payload is returned in full on the corresponding TCP connection.

The HTTP Gateway serves the network services of the Hybrid and H5 web pages in the App, providing HTTP Restful interfaces. Its logic is relatively simple; the core is the dynamic forwarding of HTTP services.
More details of the Mobile Gateway design can be found in Wang Xingchao's talk 'Ctrip Wireless Gateway' at QCon Shanghai 2015:

http://www.infoq.com/cn/presentations/ctrip-wireless-gateway

Implementation of App Network Service Based on TCP Protocol

Bandwidth and latency are the two factors that affect network service performance. Bandwidth is limited by the minimum bandwidth along the network channel; latency is the round-trip transmission time of a network packet between the client and the server. Bandwidth and latency differ greatly across network types (see below).
To achieve better network service performance, there are only two things we can do about bandwidth and latency: select the most appropriate network channel whenever possible, and otherwise optimize on the network channel in use.

Traditional non-IM apps usually implement network services with the HTTP protocol (in the form of Restful APIs). Ctrip implements them with TCP, which does add a lot of development cost, such as designing an application-layer protocol, managing the network, and handling exceptions, but the following reasons led us to base the App's network services on the TCP protocol:

Ctrip's users are sometimes in scenic areas with very poor network environments; optimizing for weak networks is difficult to achieve on top of a simple HTTP application-layer protocol.
The first HTTP request requires DNS domain name resolution, and we found that in the domestic environment the failure rate for Ctrip's domain is 2-3% (including domain hijacking and resolution failures), which seriously affects user experience.
Although HTTP is an application-layer protocol implemented on TCP, with the advantages of good encapsulation and mature client and server solutions, the disadvantage is limited control: connections, request sending, and response receiving cannot be customized and optimized, and even HTTP features such as KeepAlive long connections or pipelining are subject to proxies in the network environment and server implementations, so it is hard to exploit them fully.

Building on the TCP protocol lets us fully control every stage of the network service life cycle, including the following:

  • Get the server's IP address

  • Establish the connection

  • Serialize the network request packet

  • Send the network request

  • Receive the network response

  • Deserialize the network response packet

Our network service channel management and optimization work starts from these aspects.

TCP network service channel management and performance optimization

  1. Farewell to DNS: use IP addresses directly

For an HTTP-based network service sent for the first time, the first step is DNS domain name resolution. Our DNS statistics show only a 98% resolution success rate; the remaining 2% are resolution failures or DNS hijacking (the local DNS returns a non-origin IP address). Meanwhile DNS resolution takes about 200 milliseconds on 3G and about 100 milliseconds on 4G, an obvious delay. Because our connections are TCP-based, we skip the DNS resolution stage and connect using a built-in IP list.

The Ctrip App has a built-in list of server IPs, each with a weight. Each time a new connection is established, the highest-weighted IP is selected. When the App starts, all weights in the IP list are equal, and a round of ping operations is started to compute each IP's weight from its ping latency; the principle is that a smaller ping latency implies a correspondingly smaller network transmission delay to that IP. The industry also uses the HTTP DNS approach to defeat DNS hijacking while returning the server IP best suited to the user's network, but developing and deploying HTTP DNS requires no small cost, so we do not currently use it.
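
As a rough sketch of the weighting idea (an illustration under our own assumptions, not Ctrip's actual code), ping every built-in IP at startup, record the latency, and connect to the lowest-latency entry:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: a weighted server IP list where smaller ping latency means higher weight.
class ServerIpList {
    static final class Entry {
        final String ip;
        long pingMillis = Long.MAX_VALUE; // unmeasured IPs rank last
        Entry(String ip) { this.ip = ip; }
    }

    private final List<Entry> entries = new ArrayList<>();

    void add(String ip) { entries.add(new Entry(ip)); }

    // Called after the startup ping round with each measured latency.
    void recordPing(String ip, long millis) {
        for (Entry e : entries) {
            if (e.ip.equals(ip)) e.pingMillis = millis;
        }
    }

    // Pick the best entry: highest weight equals smallest round-trip time.
    String pickBest() {
        return entries.stream()
                .min(Comparator.comparingLong(e -> e.pingMillis))
                .map(e -> e.ip)
                .orElseThrow(IllegalStateException::new);
    }
}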

The built-in server IP list is also kept up to date: each time the App starts it calls a Mobile Config service (which supports both TCP and HTTP network types) to update the server IP list, and per-product-line updates are supported. The multi-IDC traffic splitting traditionally achieved through DNS resolution can therefore also be achieved this way.

  2. Socket connection optimization: reduce connection time

As with the Keepalive feature of the HTTP protocol, the most direct way to reduce network service time is to maintain long connections. Each TCP three-way handshake takes one RTT (round-trip time) to complete, which means 100-300 milliseconds of delay; the TCP protocol's own Slow Start mechanism for handling network congestion also affects the transmission performance of new connections.

The Ctrip App uses a pool of long connections: the pool maintains multiple TCP connections to the server, and each network service takes a free long connection from the pool, completes the service, and puts the TCP connection back. We do not implement pipelining or multiplexing on a single TCP connection; instead we use the simplest FIFO mechanism, for two reasons:

It simplifies the service processing logic of the Mobile Gateway and reduces development cost;
When the server returns multiple responses, if one response packet is very large, using multiple long connections can speed up the reception of the other responses.
If no long connection is free when a network service is initiated, or if the network service over a TCP long connection fails, a TCP short connection implements the service instead. The only difference between the long and short connections is that the short connection is closed as soon as the service completes.

Pipelining and multiplexing are different: HTTP/1.1 supports pipelining, where the client can send multiple requests, but the server must return the responses in the order the requests were sent; SPDY and HTTP/2 support multiplexing, i.e. out-of-order return of response packets, so sending requests and receiving responses do not interfere with each other, which avoids the head-of-line blocking problem that HTTP/1.1 pipelining cannot completely solve.

References

http://stackoverflow.com/questions/10362171/is-spdy-any-different-than-http-multiplexing-over-keep-alive-connections

https://http2.github.io/faq/

The HTTP/1.1 pipelining feature mentioned in reference 2 only partially resolves the head-of-line blocking problem, because a large or slow response can still block the others behind it.

  3. Weak-network and network-jitter optimization

The Ctrip App introduces a network quality parameter, computed from the network type and the end-to-end ping value, and changes its network service strategy according to the network quality:

Adjust the size of the long-connection pool: for example, on a 2G/2.5G EDGE network, reduce the pool to one connection (operators limit the number of TCP connections to a single destination IP), while on a WiFi network the pool can be enlarged, and so on.
Dynamically adjust the TCP connect, write, and read timeouts.
When the network type switches, for example between WiFi and the mobile network, or from 4G/3G down to 2G, the client's IP address changes, and the already-connected TCP sockets are bound to become invalid (each socket corresponds to a four-tuple: source IP, source port, destination IP, destination port), so all free long connections are closed automatically and existing network services retry automatically according to their status.

  4. Data format optimization: reduce transferred data volume and serialization time

The smaller the amount of data transferred, the shorter the transmission time on the same TCP connection. Ctrip once designed its own data format; later comparison with Google Protocol Buffers showed that, for specific data types, packet size drops by 20-30% and serialization/deserialization time drops by 10-20%, so core services are currently migrating to the Protocol Buffers format step by step. In addition, Facebook has shared its practice of using the FlatBuffers data format to improve performance; our analysis found it unsuitable for Ctrip's business scenarios, so we have not used it.

  5. Retry mechanism: improve the success rate of network services

Inspired by the reliable transmission that the TCP protocol's retransmission mechanism provides, we also introduced a retry mechanism at the application level to improve the success rate of network services. We found that more than 90% of network service failures are due to connection failures, in which case retrying gives another chance to connect and complete the service. We also found that failures in three of the life-cycle stages mentioned above, namely establishing the connection, serializing the request packet, and sending the request, can be retried automatically, because we can be sure the request has not yet reached the server for processing, so no idempotency problems arise (if they did, there would be duplicate orders and the like). When a network service needs to retry, a short connection is used to compensate, rather than a long connection.

With the above mechanisms, the Ctrip App's network service success rate rose from the original 95.3%+ to today's 99.5%+ (here the service success rate is the end-to-end success rate, that is, the number of successful client service acquisitions divided by the total number of requests, without distinguishing the current network conditions). The effect is significant.

Other Network Services

The Ctrip App also implements a number of other network service mechanisms to ease business development, such as a priority mechanism, where high-priority services use long connections first while low-priority services default to short connections, and a dependency mechanism, where network services are automatically initiated or canceled according to their dependencies, for example automatically canceling the sub-service when the main service fails.

During development, we also discovered some TCP socket tricks on mobile platforms:

On the iOS platform, creating a connection through the native socket interface (here meaning the POSIX socket interface) does not activate the mobile network; you must use CFSocket or a higher-level network interface to try to activate the network connection. So on first launch, Ctrip activates the mobile network by initializing some third-party SDKs and sending an HTTP request.

Set the socket options appropriately: the SO_NOSIGPIPE option suppresses the SIGPIPE event, the TCP_NODELAY option turns off the TCP Nagle algorithm, and the keep-alive option keeps the TCP connection alive.
Since iOS requires support for IPv6-only networks, the native socket must support IPv6.
If you use select to handle non-blocking I/O operations, ensure that the different return values and timeout parameters are handled correctly.

Heartbeats to keep TCP long connections available: for non-IM applications, a heartbeat mechanism adds little, because the user continuously triggers requests that use the TCP connections. In Ctrip's business scenario in particular, data statistics showed that heartbeats have minimal impact on service latency and success rate, so the heartbeat mechanism is now turned off. The original mechanism had each idle TCP connection in the long-connection pool send a heartbeat packet to the Gateway every 60 seconds; the Gateway returned a heartbeat response packet, letting both sides confirm that the TCP connection was valid.

Hybrid network service optimization

A considerable share of the Ctrip App's business uses Hybrid technology and runs in the WebView environment, where all network services (HTTP requests) are controlled by the system. We cannot control or optimize them, and the end-to-end service success rate is only about 97% (note: this refers to network service requests sent by page business logic, not static resource requests).

We adopted a technique called 'TCP Tunnel for Hybrid' to optimize Hybrid network services. Unlike traditional HTTP acceleration products, we do not intercept HTTP requests and re-send them; the switch happens automatically in the network service layer of the Ctrip Hybrid framework.

The flow of the technical solution is as follows:

If the App supports TCP Tunnel for Hybrid, the Hybrid business forwards its network requests through the Hybrid interface to the TCP network communication layer in the App's native layer. This module encapsulates the HTTP request and forwards it to the TCP Gateway as the payload of a TCP network service.

The TCP Gateway determines from the service number that this is a Hybrid forwarding service and, after unpacking, forwards the payload directly to the HTTP Gateway. The HTTP request is transparent to the HTTP Gateway, which does not need to distinguish between requests the App sent directly and requests forwarded by the TCP Gateway.

After the back-end business service finishes processing, the HTTP response is returned to the TCP Gateway via the HTTP Gateway, and the TCP Gateway returns the HTTP response as payload to the App's TCP network communication layer.

The TCP network communication layer then deserializes the payload and hands it back to the Hybrid framework, which finally makes the asynchronous callback to the Hybrid business caller. The whole process is likewise transparent to the caller of the Hybrid service, which does not know of the TCP tunnel's existence.

With this technical solution, the success rate of the Ctrip App's Hybrid business network services rose to more than 99%, and the average latency dropped by 30%.

Overseas network service optimization

Ctrip currently has no overseas IDC deployment, so overseas users have to access the domestic IDCs when using the App, and their average service latency is significantly higher than domestic users'. We adopted a technical solution called 'TCP Bypass for Oversea' to optimize overseas network service performance: it mainly uses Akamai's dedicated overseas network channel, with counterpart equipment deployed in Ctrip's domestic IDCs, and uses the dedicated channel to accelerate and improve the overseas user experience.

Network services try the Akamai channel first; if a network service fails and the retry mechanism takes effect, the retry goes over the traditional Internet channel. Using the Akamai channel plus bypass, the average service latency dropped 33% compared with using the traditional Internet channel alone, while the success rate of network services was maintained.

Discussion on Other Network Protocols

Over the past two years, our network service optimizations have all been implemented on the TCP protocol and have basically reached the optimization goals. But over the same period the new application-layer protocols SPDY and HTTP/2 have gradually gone mainstream, and the UDP-based QUIC protocol also looks very interesting and is worth following up on.

SPDY & HTTP/2

SPDY is Google's TCP-based application-layer network protocol, and its design results have fed into the HTTP/2 protocol. HTTP/2's core improvements in fact optimize the latency pain points of HTTP/1.x:

Header compression: compresses redundant HTTP request and response headers.
Multiplexing: supports multiple simultaneous requests and responses on a single TCP connection.
Long connections (more thorough than HTTP/1.x): reduces network connection time.
Push: the server can push data to the client.
Official performance test results show that SPDY or HTTP/2 reduces page load time by about 30%, but those are test results for the web. For network services in the App, we are still testing the specific optimization results internally, and since we have already applied similar optimizations on the TCP protocol, the performance gain may not be significant.

QUIC

QUIC is a UDP-based application-layer protocol developed by Google. UDP is connectionless and has no retransmission mechanism, so the application layer must ensure the reliability of the service. Domestically, Tencent has tried the QUIC protocol for weak networks; we are also testing it, and the final decision will depend on the test results.

Conclusion

Technology is only a means; in the end it has to show business results. Except for static resources and other requests that need to access the CDN, all other App network services use a unified TCP channel, which gives better performance tuning and business monitoring capabilities. Ctrip currently performs its various App network service optimizations on the TCP protocol, balancing a variety of technical solutions. Even as HTTP/2 and other new protocols mature, the TCP protocol's flexibility in supporting our own targeted performance optimizations remains a special advantage. We hope this summary of our practice offers some reference value to domestic wireless technology practitioners.

Reference

app-network-service-and-performance-optimization-of-ctrip

Using RxJava for Android Dev

Desc of RxJava

ReactiveX is a library for composing asynchronous and event-based programs by using observable sequences.

It extends the observer pattern to support sequences of data and/or events and adds operators that allow you to compose sequences together declaratively while abstracting away concerns about things like low-level threading, synchronization, thread-safety, concurrent data structures, and non-blocking I/O.

Observables fill the gap by being the ideal way to access asynchronous sequences of multiple items

                single item               multiple items
synchronous     T getData()               Iterable<T> getData()
asynchronous    Future<T> getData()       Observable<T> getData()

Using RxAndroid

1. Setting Up RxAndroid

To use RxAndroid in an Android Studio project, add it as a compile dependency in the app module’s build.gradle.

compile 'io.reactivex:rxandroid:0.25.0'

2. Basics of Observers and Observables

When working with ReactiveX, you will be using observables and observers extensively. You can think of an observable as an object that emits data and an observer as an object that consumes that data. In RxJava and RxAndroid, observers are instances of the Observer interface, and observables are instances of the Observable class.

The Observable class has many static methods, called operators, to create Observable objects. The following code shows you how to use the just operator to create a very simple Observable that emits a single String.

Observable<String> myObservable 
= Observable.just("Hello"); // Emits "Hello"

The observable we just created will emit its data only when it has at least one observer. To create an observer, you create a class that implements the Observer interface. The Observer interface has intuitively named methods to handle the different types of notifications it can receive from the observable. Here’s an observer that can print the String emitted by the observable we created earlier:

Observer<String> myObserver = new Observer<String>() {
    @Override
    public void onCompleted() {
        // Called when the observable has no more data to emit
    }

    @Override
    public void onError(Throwable e) {
        // Called when the observable encounters an error
    }

    @Override
    public void onNext(String s) {
        // Called each time the observable emits data
        Log.d("MY OBSERVER", s);
    }
};

To assign an observer to an observable, you should use the subscribe method, which returns a Subscription object. The following code makes myObserver observe myObservable:

Subscription mySubscription = myObservable.subscribe(myObserver);

As soon as an observer is added to the observable, it emits its data. Therefore, if you execute the code now, you will see Hello printed in Android Studio’s logcat window.

You might have noticed that we didn’t use the onCompleted and the onError methods in myObserver. As these methods are often left unused, you also have the option of using the Action1 interface, which contains a single method named call.

Action1<String> myAction = new Action1<String>() {
    @Override
    public void call(String s) {
        Log.d("My Action", s);
    }
};

When you pass an instance of Action1 to the subscribe method, the call method is invoked whenever the observable emits data.

Subscription mySubscription = myObservable.subscribe(myAction);

To detach an observer from its observable while the observable is still emitting data, you can call the unsubscribe method on the Subscription object.

mySubscription.unsubscribe();

3. Using Operators

Now that you know how to create observers and observables, let me show you how to use ReactiveX’s operators that can create, transform, and perform other operations on observables. Let’s start by creating a slightly more advanced Observable, one that emits items from an array of Integer objects. To do so, you have to use the from operator, which can generate an Observable from arrays and lists.

Observable<Integer> myArrayObservable
        = Observable.from(new Integer[]{1, 2, 3, 4, 5, 6}); // Emits each item of the array, one at a time

myArrayObservable.subscribe(new Action1<Integer>() {
    @Override
    public void call(Integer i) {
        Log.d("My Action", String.valueOf(i)); // Prints the number received
    }
});

When you run this code, you will see each of the numbers of the array printed one after another.

If you’re familiar with JavaScript, Ruby, or Kotlin, you might be familiar with higher-order functions such as map and filter, which can be used when working with arrays. ReactiveX has operators that can perform similar operations on observables. However, because Java 7 doesn’t have lambdas and higher-order functions, we’ll have to do it with classes that simulate lambdas. To simulate a lambda that takes one argument, you will have to create a class that implements the Func1 interface.

Here’s how you can use the map operator to square each item of myArrayObservable:

myArrayObservable = myArrayObservable.map(new Func1<Integer, Integer>() { // Input and output are both Integer
    @Override
    public Integer call(Integer integer) {
        return integer * integer; // Square the number
    }
});

Note that the call to the map operator returns a new Observable, it doesn’t change the original Observable. If you subscribe to myArrayObservable now, you will receive squares of the numbers.

Operators can be chained. For example, the following code block uses the skip operator to skip the first two numbers, and then the filter operator to ignore odd numbers:

myArrayObservable
    .skip(2) // Skip the first two items
    .filter(new Func1<Integer, Boolean>() {
        @Override
        public Boolean call(Integer integer) { // Ignores any item that returns false
            return integer % 2 == 0;
        }
    });

// Emits 4 and 6

4. Handling Asynchronous Jobs

The observers and observables we created in the previous sections worked on a single thread, Android’s UI thread. In this section, I will show you how to use ReactiveX to manage multiple threads and how ReactiveX solves the problem of callback hell.

Assume you have a method named fetchData that can be used to fetch data from an API. Let’s say it accepts a URL as its parameter and returns the contents of the response as a String. The following code snippet shows how it could be used.

String content = fetchData("http://www.google.com");
// fetches the contents of google.com as a String

This method needs to run on its own thread, because Android does not allow network operations on the UI thread. This means you would either create an AsyncTask or create a Thread that uses a Handler.

With ReactiveX, however, you have a third option that is slightly more concise. Using the subscribeOn and observeOn operators, you can explicitly specify which thread should run the background job and which thread should handle the user interface updates.

The following code creates a custom Observable using the create operator. When you create an Observable in this manner, you have to implement the Observable.OnSubscribe interface and control what it emits by calling the onNext, onError, and onCompleted methods yourself.

Observable<String> fetchFromGoogle = Observable.create(new Observable.OnSubscribe<String>() {
    @Override
    public void call(Subscriber<? super String> subscriber) {
        try {
            String data = fetchData("http://www.google.com");
            subscriber.onNext(data); // Emit the contents of the URL
            subscriber.onCompleted(); // Nothing more to emit
        } catch (Exception e) {
            subscriber.onError(e); // In case there are network errors
        }
    }
});

When the Observable is ready, you can use subscribeOn and observeOn to specify the threads it should use and subscribe to it.

fetchFromGoogle
    .subscribeOn(Schedulers.newThread()) // Create a new Thread
    .observeOn(AndroidSchedulers.mainThread()) // Use the UI thread
    .subscribe(new Action1<String>() {
        @Override
        public void call(String s) {
            view.setText(view.getText() + "\n" + s); // Change a View
        }
    });

You might still be thinking that the reactive approach isn’t drastically better than using the AsyncTask or Handler classes. You are right, you don’t really need ReactiveX if you have to manage only one background job.

Now consider a scenario that would result in a complex codebase if you used the conventional approach. Let’s say you have to fetch data from two (or more) websites in parallel and update a View only when all the requests have completed. If you follow the conventional approach, you would have to write lots of unnecessary code to make sure that the requests completed without errors.

Consider another scenario in which you have to start a background job only after another background job has completed. Using the conventional approach, this would result in nested callbacks.

With ReactiveX's operators, both scenarios can be handled with very little code. For example, if you have to use fetchData to fetch the contents of two websites, for example Google and Yahoo, you would create two Observable objects and use the subscribeOn method to make them run on different threads.

fetchFromGoogle = fetchFromGoogle.subscribeOn(Schedulers.newThread());
fetchFromYahoo = fetchFromYahoo.subscribeOn(Schedulers.newThread());

To handle the first scenario in which both requests need to run in parallel, you can use the zip operator and subscribe to the Observable it returns.

// Fetch from both simultaneously
Observable<String> zipped = Observable.zip(fetchFromGoogle, fetchFromYahoo,
        new Func2<String, String, String>() {
            @Override
            public String call(String google, String yahoo) {
                // Do something with the results of both threads
                return google + "\n" + yahoo;
            }
        });
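Note that nothing runs until you subscribe to zipped; only then do both source Observables start on their own threads, and the combined result arrives in a single callback. A minimal usage sketch, reusing the view from the earlier example:

```java
zipped
    .observeOn(AndroidSchedulers.mainThread()) // deliver the combined result on the UI thread
    .subscribe(new Action1<String>() {
        @Override
        public void call(String s) {
            view.setText(s); // both responses, joined by the zip function
        }
    });
```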

Similarly, to handle the second scenario, you can use the concat operator to run the threads one after another.

Observable<String> concatenated = Observable.concat(fetchFromGoogle, fetchFromYahoo);
// Emit the results one after another

5. Handling Events

RxAndroid has a class named ViewObservable that makes it easy to handle events associated with View objects. The following code snippet shows you how to create a ViewObservable that can be used to handle the click events of a Button.

Button myButton
        = (Button) findViewById(R.id.my_button); // Get the Button from the layout

Observable<OnClickEvent> clicksObservable
        = ViewObservable.clicks(myButton); // Create a ViewObservable for the Button

You can now subscribe to clicksObservable and use any of the operators you learned about in the previous sections. For example, if you want your app to skip the first four clicks of the button and start responding from the fifth click onwards, you could use the following implementation:

clicksObservable
    .skip(4) // Skip the first 4 clicks
    .subscribe(new Action1<OnClickEvent>() {
        @Override
        public void call(OnClickEvent onClickEvent) {
            Log.d("Click Action", "Clicked!");
            // Printed from the fifth click onwards
        }
    });

Conclusion

In this tutorial, you learned how to use ReactiveX’s observers, observables, and operators to handle multiple asynchronous operations and events. As working with ReactiveX involves functional, reactive programming, a programming paradigm most Android developers are not used to, don’t be too hard on yourself if you don’t get it right the first time. You should also know that ReactiveX code will be a lot more readable if you use a modern programming language, such as Kotlin, that supports higher-order functions.

reference articles:

ReactiveX

Getting Started With ReactiveX on Android

An Analysis of Network Requests for Android Development

Description of Network Requests

It is necessary to analyze network requests in Android development: almost all Android apps depend on the network for data interchange.

Net Request

Network requests come at three levels: simple JSON (or form) requests, native HTTP requests, and native socket requests.

Simple JSON or Form Request

Here, let’s take Retrofit’s asynchronous requests as the example.

Retrofit turns your HTTP API into a Java interface.

public interface GitHubService {
    @GET("users/{user}/repos")
    Call<List<Repo>> listRepos(@Path("user") String user);
}

The Retrofit class generates an implementation of the GitHubService interface.

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://api.github.com/")
        .build();

GitHubService service = retrofit.create(GitHubService.class);

Each Call from the created GitHubService can make a synchronous or asynchronous HTTP request to the remote webserver.

Call<List<Repo>> repos = service.listRepos("octocat");
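A Call can then be run synchronously with execute() or asynchronously with enqueue(). A minimal asynchronous sketch follows; note that deserializing List<Repo> assumes a converter (for example GsonConverterFactory) was also added to the Retrofit builder, which the snippet above omits:

```java
repos.enqueue(new Callback<List<Repo>>() {
    @Override
    public void onResponse(Call<List<Repo>> call, Response<List<Repo>> response) {
        List<Repo> repoList = response.body(); // parsed by the registered converter
    }

    @Override
    public void onFailure(Call<List<Repo>> call, Throwable t) {
        t.printStackTrace(); // network failure or unexpected error
    }
});
```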

Use annotations to describe the HTTP request:

  • URL parameter replacement and query parameter support

  • Object conversion to request body (e.g., JSON, protocol buffers)

  • Multipart request body and file upload

A form request is made as follows:

FORM ENCODED AND MULTIPART

Methods can also be declared to send form-encoded and multipart data.

Form-encoded data is sent when `@FormUrlEncoded` is present on the method. Each key-value pair is annotated with `@Field`, containing the name and the object providing the value.
```java
@FormUrlEncoded
@POST("user/edit")
Call<User> updateUser(@Field("first_name") String first, @Field("last_name") String last);
```

Multipart requests are used when `@Multipart` is present on the method. Parts are declared using the `@Part` annotation.
```java
@Multipart
@PUT("user/photo")
Call<User> updateUser(@Part("photo") RequestBody photo, @Part("description") RequestBody description);
```
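As a hedged usage sketch, the parts can be built with OkHttp’s RequestBody.create; the file path, media types, and service instance below are illustrative placeholders:

```java
File photoFile = new File("/path/to/photo.png"); // placeholder path
RequestBody photo = RequestBody.create(MediaType.parse("image/png"), photoFile);
RequestBody description = RequestBody.create(MediaType.parse("text/plain"), "profile photo");

Call<User> call = service.updateUser(photo, description); // service implements the interface above
```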

Native HTTP Request

A native HTTP request lets you compose the HTTP packet yourself and upload it to the server. Here, let’s take uploading an image as the example. The code is as follows:

FileInputStream fis = null;
DataOutputStream dos = null;
InputStream responseIs = null;
try {
    operateImage = images[0];
    Uri uri = Uri.parse(operateImage.getResUri());
    String path = uri.getHost() + uri.getPath();
    File file = new File(path);
    fis = new FileInputStream(file);

    String end = "\r\n";
    String twoHyphens = "--";
    String boundary = "******";

    URL url = new URL(urlString);
    HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
    /**
     * Set the HTTP packet header.
     */
    // Limit the packet size to avoid out-of-memory crashes on the phone
    //httpURLConnection.setChunkedStreamingMode(1280 * 1024); // 1280 KB
    // Allow input and output
    httpURLConnection.setDoInput(true);
    httpURLConnection.setDoOutput(true);
    httpURLConnection.setUseCaches(false);

    /**
     * Set the request fields.
     */
    httpURLConnection.setRequestMethod("POST");
    httpURLConnection.setRequestProperty("Connection", "Keep-Alive");
    httpURLConnection.setRequestProperty("Charset", "UTF-8");
    httpURLConnection.setRequestProperty("Content-Type",
            "multipart/form-data;boundary=" + boundary);
    httpURLConnection.setConnectTimeout(10 * 1000);

    // Construct the form-data fields of the body
    StringBuilder textEntity = new StringBuilder();
    if (params == null) {
        params = new HashMap<>(0);
    }
    Iterator<String> iterator = params.keySet().iterator();
    while (iterator.hasNext()) {
        String key = iterator.next();
        Object value = params.get(key);
        textEntity.append(twoHyphens + boundary + end);
        textEntity.append("Content-Disposition:form-data;name=" + key + end + end);
        textEntity.append(value.toString());
        textEntity.append(end);
    }
    textEntity.append(twoHyphens + boundary + end);
    textEntity.append("Content-Disposition:form-data;" + "name=\"" + paramsFilename + "\";filename=\"" + file.getName()
            + "\"" + end);
    textEntity.append(end);

    /**
     * Open the HTTP connection.
     */
    dos = new DataOutputStream(httpURLConnection.getOutputStream());
    byte[] text = textEntity.toString().getBytes();

    /**
     * Write the HTTP body.
     */
    dos.write(text);

    int totalSize = fis.available();
    int progressSize = 0;

    int bufferSize = 1024 * 10;
    byte[] buffer = new byte[bufferSize];
    int length = -1;
    while ((length = fis.read(buffer)) != -1) {
        dos.write(buffer, 0, length);
        progressSize += length;
        publishProgress(progressSize, totalSize); // report upload progress
    }
    dos.writeBytes(end);
    dos.writeBytes(twoHyphens + boundary + twoHyphens + end); // closing boundary
    fis.close();
    dos.flush();

    responseIs = httpURLConnection.getInputStream();
    String response = readStreamToByteArray(responseIs); // read the server's response into a String
    return response;
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (fis != null) {
        try {
            fis.close();
        } catch (IOException e) {
            e.printStackTrace();
            fis = null;
        }
    }

    if (dos != null) {
        try {
            dos.close();
        } catch (IOException e) {
            e.printStackTrace();
            dos = null;
        }
    }

    if (responseIs != null) {
        try {
            responseIs.close();
        } catch (IOException e) {
            e.printStackTrace();
            responseIs = null;
        }
    }
}

Native TCP/IP Request

Here, let’s use socket programming as the example to show native TCP/IP requests.

UDP

The UDP server code is as follows:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketException;

public class UdpServer {
    public static void main(String[] args) {
        // declare a UDP socket (DatagramSocket)
        DatagramSocket socket = null;
        try {
            // bind to a port
            socket = new DatagramSocket(1234);
            // receive buffer
            byte[] data = new byte[512];
            // packet that the received data will be written into
            DatagramPacket packet = new DatagramPacket(data, data.length);
            // block until a message arrives from a client
            socket.receive(packet);

            String msg = new String(packet.getData(), packet.getOffset(),
                    packet.getLength());
            System.out.println(msg);
        } catch (SocketException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                socket.close();
            }
        }
    }
}

The UDP client code is as follows:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;

public class UdpClient {
    public static void main(String[] args) {
        DatagramSocket socket = null;
        String msg = null;
        try {
            socket = new DatagramSocket();
            // read from standard input
            BufferedReader reader = new BufferedReader(new InputStreamReader(
                    System.in));
            while (!(msg = reader.readLine()).equalsIgnoreCase("exit")) {
                // the server's address
                InetAddress serverAddress = InetAddress.getByName("127.0.0.1");
                // construct a packet addressed to the server on port 1234
                DatagramPacket packet = new DatagramPacket(msg.getBytes(),
                        msg.getBytes().length, serverAddress, 1234);
                // send the message
                socket.send(packet);
            }
        } catch (SocketException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                socket.close();
            }
        }
    }
}

TCP

The TCP server code is as follows:

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpServer {
    public static void main(String[] args) {
        // declare a server socket
        ServerSocket serverSocket = null;
        // declare a socket waiting for a client connection
        Socket socket = null;
        try {
            int temp;
            // bind to a port
            serverSocket = new ServerSocket(5937);
            // block until a client connects
            socket = serverSocket.accept();
            // get the input stream from the client
            InputStream inputStream = socket.getInputStream();
            // read and print
            byte[] buffer = new byte[512];
            while ((temp = inputStream.read(buffer)) != -1) {
                System.out.println(new String(buffer, 0, temp));
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (socket != null) {
                    socket.close();
                }
                if (serverSocket != null) {
                    serverSocket.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

The TCP client code is as follows:

package me.bym.tcp;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.net.UnknownHostException;

public class TcpClient {
    public static void main(String[] args) {
        // declare a socket
        Socket socket = null;
        try {
            String msg = null;
            // connect to the server
            socket = new Socket("127.0.0.1", 5937);
            // get input from the keyboard (standard input)
            BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
            OutputStream outputStream = socket.getOutputStream();
            while (!(msg = reader.readLine()).equalsIgnoreCase("exit")) {
                outputStream.write(msg.getBytes());
            }
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (socket != null) {
                    socket.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

By the way, you can use other frameworks to construct TCP/IP servers and clients, like Netty.

Getting Started Building an Android App with React Native

Introduction

React (sometimes styled React.js or ReactJS) is an open-source JavaScript library providing a view for data rendered as HTML. React views are typically rendered using components that contain additional components specified as custom HTML tags. React promises programmers a model in which subcomponents cannot directly affect enclosing components (“data flows down”); efficient updating of the HTML document when data changes; and a clean separation between components on a modern single-page application.

It is maintained by Facebook, Instagram, and a community of individual developers and corporations. According to the JavaScript analytics service Libscore, React is currently used on the websites of Netflix, Imgur, Bleacher Report, Feedly, Airbnb, SeatGeek, HelloSign, and others.

Getting Started

Installing Dependencies

We recommend installing Node.js and Python2 via Chocolatey, a popular package manager for Windows. Open a Command Prompt as Administrator, then run:

choco install nodejs.install
choco install python2

The React Native CLI

Node comes with npm, which lets you install the React Native command line interface.

npm install -g react-native-cli

Setting Up the Android Development Environment

Setting up your development environment can be somewhat tedious if you’re new to Android development. If you’re already familiar with Android development, there are a few things you may need to configure. In either case, please make sure to carefully follow the next few steps.

Testing your React Native Installation

Use the React Native command line interface to generate a new React Native project called “AwesomeProject”, then run react-native run-android inside the newly created folder.

react-native init AwesomeProject
cd AwesomeProject
react-native run-android

If everything is set up correctly, you should see your new app running in your Android emulator shortly.

You can modify the app by changing the React code.

The Hello World code for React Native is as follows:

```javascript
import React, { Component } from 'react';
import { AppRegistry, Text } from 'react-native';

class HelloWorldApp extends Component {
  render() {
    return (
      <Text>Hello world!</Text>
    );
  }
}

AppRegistry.registerComponent('HelloWorldApp', () => HelloWorldApp);
```

The Alibaba team has created another tool for building apps, called Weex.

reference articles:

react wiki

react-native

weex

Thinking in High Concurrency

In one word: divide

  • Divide and conquer; split the flow at multiple levels

  • Browser side, server front end, middle tier, database side

  • There is an opportunity to split the load everywhere

Some notes on high-concurrency development

reference
Recently, all kinds of IT media and industry technology conferences have featured sites disclosing and sharing their technology with insiders, from giants like Facebook and Baidu down to newly launched sites. The technology and extraordinary processing power of large sites such as Facebook and Baidu do feel refreshing, but not every site has hundreds of millions of users and huge traffic like Facebook or Baidu, massive data to store, or a need for MapReduce/parallel computing and HBase/column storage. Technical means exist to support the business actually being run; there is no need to chase fashion or insist on a connection with whatever technology happens to be popular.

At recent technology conferences most eyes have been on these large sites, but the technology stacks of small and medium-sized portals are also worth exploring and paying attention to. Not all engineers in the world serve the large portals; far more of them serve small and medium-sized websites that have just started out, and they make up more than 60% of the profession. While attention goes to the large portals, the technical evolution and practical experience of small and medium-sized sites are all the more worth sharing.

Both large portals and small to medium-sized vertical sites are driven by stability, performance, and scalability. The experience shared by large sites is worth learning and borrowing from, but their more specific practices are not applicable to every site. I dare not speak for sites developed in other languages, but for systems developed in Java, I can put in a few words:

JVM

The JVM parameters used to run a JEE container: correct configuration of these parameters directly affects the performance and processing capacity of the whole system. JVM tuning is mainly about optimizing memory management, in the following four areas:

  1. HeapSize: the heap size, essentially the JVM’s memory strategy; this is critical.
  2. GarbageCollector: the configuration parameters that choose among the JVM’s garbage collection algorithms (strategies).
  3. StackSize: the JVM stack is the instruction area; each thread has its own stack, and the stack size limits the number of threads.
  4. DeBug/Log: the JVM can also be configured to output runtime logs and crash logs, which is very important; configure the appropriate parameters for each kind of JVM log output.
    JVM configuration tips can be found everywhere on the Internet, but I still recommend reading the official Sun articles to really understand the parameters:
  5. Java HotSpot VM Options:
    http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
  6. Troubleshooting Guide for Java SE 6 with HotSpot VM: http://www.oracle.com/technetwork/java/javase/index-137495.html
    In addition, not every engineer deals with these JVM parameters every day; if you forget the key ones, run java -X (uppercase X) to list them. A sample launch command is sketched below.
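As a hedged illustration only (the values are placeholders, not recommendations; size them against your own workload), a launch command touching all four areas above might look like this:

```
# heap (-Xms/-Xmx), collector (-XX:+UseConcMarkSweepGC),
# per-thread stack (-Xss), and GC logging (-verbose:gc, -Xloggc)
$ java -Xms512m -Xmx1024m -Xss256k -XX:+UseConcMarkSweepGC -verbose:gc -Xloggc:gc.log -jar myapp.jar
```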

JDBC

JDBC parameters for MySQL: as introduced in a previous article, sensible use of JDBC configuration parameters, whether on a single machine or in a cluster, also has a great impact on database operations.
Some of the so-called high-performance Java ORM open-source frameworks simply enable many JDBC parameters that are off by default:

  1. For example: autoReconnect, prepStmtCacheSize, cachePrepStmts, useNewIO, blobSendChunkSize.
  2. For cluster environments, for example: roundRobinLoadBalance, failOverReadOnly, autoReconnectForPools, secondsBeforeRetryMaster.
    For details, refer to the official MySQL JDBC manual (a short connection sketch follows):
    http://dev.mysql.com/doc/refman/5.1/en/connectors.html#cj-jdbc-reference
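As a minimal sketch of passing such parameters (host, database, and credentials are placeholders), Connector/J accepts them as query parameters on the JDBC URL:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcParamsDemo {
    public static void main(String[] args) throws Exception {
        // statement caching and reconnect options ride along on the JDBC URL
        String url = "jdbc:mysql://localhost:3306/mydb"
                + "?cachePrepStmts=true"
                + "&prepStmtCacheSize=250"
                + "&autoReconnect=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```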

Database connection pool (DataSource)

Frequent opening and closing of connections between the application and the database creates bottlenecks and a great deal of overhead for system performance. A JDBC connection pool is responsible for allocating, managing, and releasing database connections: it lets an application reuse an existing connection instead of re-establishing one, so the application does not constantly connect and disconnect, and it releases connections whose idle time exceeds the maximum idle time, avoiding leaks caused by unreleased connections. This technique can significantly improve the performance of database operations.
One point I think needs explaining: connections obtained from the pool still need to be closed. The pool establishes the appropriate connections to the database at startup, so afterwards the application no longer deals with the database directly; what the application obtains from the pool is “borrowed”, and what is lent out must be returned. Imagine twenty buckets by a pool: people who need water take a bucket, and if twenty people take water without putting the buckets back, those who come later can only wait for someone to return one, and the resource becomes blocked. By the same token, when the application obtains a Connection object from the “pool”, it must return it after use, maintaining the pool’s “borrow and return” discipline. (A pool sketch follows.)
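The original text does not name a pool implementation; as one illustration, here is a minimal sketch with HikariCP (URL and credentials are placeholders). Closing a pooled connection returns it to the pool rather than tearing it down:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder
        config.setUsername("user");
        config.setPassword("password");
        config.setMaximumPoolSize(20); // the "twenty buckets"

        try (HikariDataSource ds = new HikariDataSource(config)) {
            try (Connection conn = ds.getConnection()) { // borrow a bucket
                System.out.println("got a pooled connection");
            } // close() returns the connection to the pool
        }
    }
}
```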
References:
ref

Data access

Database server optimization and data access: which type of data should live where is a problem worth thinking about. Storage in the future is likely to be mixed; cache, NoSQL, DFS, and a database may all appear in one system. At home, tableware and everyday clothes are not stored in the same kind of furniture; no one puts tableware and clothes in the same cabinet. Likewise, the different types of data in a system need storage environments appropriate to them. Files and images can be classified first by access heat, or by file size. Strongly relational data that needs transaction support should use a traditional transactional database; weakly relational data with no need for transactions can consider NoSQL; massive file storage can consider a DFS that supports network storage; and caching depends on the size of individual data items and the read/write ratio.

Another point worth noting is read/write splitting. Whether in a database or a NoSQL environment, reads are most of the time far greater than writes, so the design must consider not only scattering the read data across multiple machines but also the data consistency among those machines. With MySQL, one master and several slaves, plus MySQL Proxy or some of the JDBC parameters mentioned above (roundRobinLoadBalance, failOverReadOnly, autoReconnectForPools, secondsBeforeRetryMaster), lets subsequent application development separate reads from writes, scattering the heavy read pressure across many machines while also ensuring data consistency.

Cache

In general, caches come in two kinds: local caches and distributed caches.

  1. Local cache: in Java, a local cache means keeping data in a static structure and reading it from there when needed; ConcurrentHashMap or CopyOnWriteArrayList are the recommended containers (a sketch follows this list). More concretely, a cache consumes system memory, so the memory given to it must stay in an appropriate proportion; exceeding it is counterproductive and makes the whole system run inefficiently.

  2. Distributed cache: generally used in distributed environments, storing each machine’s cached data centrally. Beyond caching, it can also serve as a means of data synchronization and transfer between the parts of a distributed system; the most commonly used are Memcached and Redis.
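A minimal local-cache sketch with ConcurrentHashMap, as suggested in the first item; the User type and the database lookup are illustrative placeholders:

```java
import java.util.concurrent.ConcurrentHashMap;

public class LocalCache {
    private static final ConcurrentHashMap<String, User> CACHE = new ConcurrentHashMap<>();

    public static User getUser(String id) {
        // load once on a miss, then serve later readers from memory
        return CACHE.computeIfAbsent(id, LocalCache::loadUserFromDb);
    }

    private static User loadUserFromDb(String id) {
        return new User(id); // stand-in for a real database query
    }

    static class User {
        final String id;
        User(String id) { this.id = id; }
    }
}
```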

Reading and writing data on different media has very different efficiency. When using caches in a system, the point is to keep your data closer to the CPU. Keep the following numbers in mind at all times; the chart is the work of Google’s Jeff Dean (ref), as shown:

(Figure: Cache-speed — access latency of different storage media)

Concurrency / multithreading

In a high-concurrency environment, developers are advised to use the JDK’s bundled concurrency package (java.util.concurrent). Available since JDK 1.5, the java.util.concurrent utilities simplify multi-threaded development and fall into the following main parts:

  1. Thread pools: the thread pool interfaces (Executor, ExecutorService) and implementation classes (ThreadPoolExecutor, ScheduledThreadPoolExecutor). The JDK thread pool framework manages the queuing and scheduling of tasks for you and allows controlled shutdown. Since a running thread consumes CPU, and creating and ending threads also costs CPU, a thread pool not only manages multi-threading effectively but also improves threading efficiency (a sketch follows this list).

  2. Local queues: efficient, scalable, thread-safe non-blocking FIFO queues. Five implementations in java.util.concurrent support the extended BlockingQueue interface, which defines blocking versions of put and take: LinkedBlockingQueue, ArrayBlockingQueue, SynchronousQueue, PriorityBlockingQueue, and DelayQueue. These classes cover the most common usage contexts of producer-consumer, message passing, parallel task execution, and related concurrent designs.

  3. Synchronizers: four classes that assist common special-purpose synchronization idioms. Semaphore is a classic concurrency tool. CountDownLatch is an extremely simple yet extremely useful utility for blocking until a given number of signals, events, or conditions has occurred. CyclicBarrier is a resettable multi-way rendezvous point useful in some parallel programming styles. Exchanger allows two threads to exchange objects at a rendezvous point, which is useful in multi-pipeline designs.

  4. Concurrent collections: the package also provides Collection implementations designed for multi-threaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet. When many threads are expected to access a given collection, ConcurrentHashMap usually outperforms a synchronized HashMap, and ConcurrentSkipListMap usually outperforms a synchronized TreeMap. CopyOnWriteArrayList outperforms a synchronized ArrayList when the expected reads and traversals far outnumber the list’s updates.
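A minimal sketch combining a thread pool with a CountDownLatch synchronizer, two of the pieces listed above:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolAndLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // reuse 4 threads
        CountDownLatch latch = new CountDownLatch(10);

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(() -> {
                System.out.println("task " + taskId + " on " + Thread.currentThread().getName());
                latch.countDown(); // signal that this task finished
            });
        }

        latch.await();   // block until all ten tasks are done
        pool.shutdown(); // controlled shutdown
    }
}
```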

Queues

Queues can be divided into two categories: local queues and distributed queues.

Local queue: commonly used for batch writes of non-time-critical data. You can buffer data and write it out in one batch once a certain count is reached, implemented with a BlockingQueue or a List/Map (a sketch follows below).
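A minimal sketch of such batching with a BlockingQueue; flushToDb stands in for the real batch write:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchWriter {
    private static final int BATCH_SIZE = 100;
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void submit(String record) throws InterruptedException {
        queue.put(record); // producers return immediately
    }

    public void runConsumer() throws InterruptedException {
        List<String> batch = new ArrayList<>(BATCH_SIZE);
        while (true) {
            batch.add(queue.take()); // block until data arrives
            queue.drainTo(batch, BATCH_SIZE - batch.size());
            if (batch.size() >= BATCH_SIZE) {
                flushToDb(batch); // one write covers many records
                batch.clear();
            }
        }
    }

    private void flushToDb(List<String> batch) {
        System.out.println("writing " + batch.size() + " records");
    }
}
```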

Related information: the Sun Java API.
Distributed queue: generally used as message middleware, the bridge for communication between subsystems in a distributed environment; in the JEE world the most used are Apache ActiveMQ and Sun’s OpenMQ.

Lightweight MQ middleware has also been introduced here before, for example Kestrel and Redis (ref: http://www.javabloger.com/article/mq-kestrel-redis-for-java.html). Recently I heard that LinkedIn’s search technology team has released an MQ product, Kafka (ref: http://sna-projects.com/kafka); one to keep an eye on.

Relevant information:

  1. ActiveMQ http://activemq.apache.org/getting-started.html

  2. OpenMQ http://mq.java.net/about.html

  3. Kafka http://sna-projects.com/kafka

  4. JMS article http://www.javabloger.com/article/category/jms

NIO

NIO appeared in JDK 1.4. Before Java 1.4, the JDK offered only stream-oriented I/O, where reads and writes process one byte at a time: an input stream produces one byte of data and an output stream consumes one byte. Stream-oriented I/O is very slow, and a packet or datagram is either fully received or not yet available. Java NIO’s non-blocking approach uses the Reactor pattern: when data arrives you are notified automatically, with no busy waiting, which greatly improves system performance. In real scenarios, NIO is mostly used in two areas: (1) file read/write operations and (2) network data streams. The core NIO objects to master are: (1) the selector (Selector), (2) the channel (Channel), and (3) the buffer (Buffer).

My two cents:

  1. Within Java NIO, memory-mapped files are an efficient approach that can be used when a cache separates cold and hot data: the cold part of the cache can be handled this way. It is faster than conventional stream- or channel-based I/O because it makes file data appear as the contents of a memory array, and only the portions actually read or written are mapped into memory rather than reading the entire file (a sketch follows this list).

  2. The MySQL JDBC driver can also use NIO to operate on the database and improve system performance.
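A minimal memory-mapped file sketch ("data.bin" is a placeholder path); only the mapped region is brought into memory:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("data.bin", "rw");
             FileChannel channel = file.getChannel()) {
            // map the first 1 KB of the file; reads and writes go through
            // the buffer without explicit read()/write() calls
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buffer.put(0, (byte) 42);          // write one byte
            System.out.println(buffer.get(0)); // read it back
        }
    }
}
```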

Long connections / Servlet 3.0

By long connection here I mean long polling. Previously, a browser (client) that wanted to notice server-side data changes had to poll the server constantly; with many clients this inevitably puts great pressure on the server side, for example with in-site messages in a forum. The Servlet 3.0 specification provides a new feature, asynchronous I/O communication, which keeps the connection open; using Servlet 3 asynchronous requests can greatly relieve the pressure on the server.

The principle of Servlet 3.0 is to suspend the request and set a wait timeout. If a background event fires, the result is returned to the waiting client request; if nothing happens within the wait time, the request is returned to the client anyway, the client issues the request again, and the client-server interaction repeats.
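A minimal Servlet 3.0 long-polling sketch; the event wiring is an illustrative placeholder (a real application would hand the AsyncContext to whatever component produces the events):

```java
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // suspend the request without holding the thread
        ctx.setTimeout(30000);               // return to the client after 30s with no event
        // later, from the thread where the event fires:
        //   ctx.getResponse().getWriter().write("new message");
        //   ctx.complete();                 // resume and finish the request
    }
}
```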

It is like someone telling you: “If anyone comes looking for you, I will notify you at once and you can come see them.” Under the old model you had to keep asking whether anyone was looking for you, regardless of whether anyone actually was, and both the asker and the person being asked would be worn out.

Log

Log4j is the commonly used tool. When a system has just gone live, the log level is generally set to INFO; once it is properly in production, it is generally set to ERROR. At any time, though, the logged output deserves developers’ attention: you can generally rely on the logs to find problems or to optimize system performance, and runtime logs are the basis for reporting and troubleshooting.
In short, output logs to different destinations according to defined strategies and levels, so that they are easy to analyze and manage. Without an output strategy, once there is more than one machine you will, over time, face a big pile of chaotic logs with nowhere to start troubleshooting; an output strategy is therefore the key to using logs well.
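As a hedged sketch of such a strategy, a minimal log4j 1.x configuration (appender names and the file path are placeholders) that keeps production at ERROR while rolling files daily:

```
log4j.rootLogger=ERROR, console, file

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%c] %m%n

# daily rolling file keeps old logs manageable
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.File=/var/log/myapp/app.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d %-5p [%c] %m%n
```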

Reference: ref

Packaging / deployment

At code design time, it is best to split the different kinds of functional modules, at coarse granularity, into different projects in the IDE, so they can be packaged into different jars and deployed in different environments. Consider this application scenario: every day the system must fetch about 100 news items and some city weather forecasts from a remote SP. The daily data volume is small, but the front-end read traffic is clearly large, so the architecture obviously calls for read/write separation.

If the web application and the scheduled-job module are bundled into one project and one package, then every machine added for scaling carries both the web application and the timer; because the modules are not separated, the timer on every machine runs, and duplicate data ends up in the database.

If the web application and the timer are developed as two projects, they can be packaged and deployed separately: ten web instances can correspond to one scheduler, decomposing the front-end request pressure while the data is not written twice.

Another advantage is sharing. In the scenario above, both the web application and the timer need to read the database, so both projects contain database-access code, and the logic feels messy. If you extract a DAL-layer jar, the developers of the web and timer modules only need to reference that jar and develop the business logic, programming against interfaces without worrying about the concrete database operations, which are completed by other developers; the division of labor in development becomes very clear, with no interference.

Frameworks

The so-called popular lightweight SSH (Struts/Spring/Hibernate) stack is not actually lightweight for many small and medium-sized projects: developers must maintain not only the code but also cumbersome XML configuration files, and a misconfigured file can keep the whole project from running. There are plenty of products that can replace the SSH (Struts/Spring/Hibernate) stack without configuration files; I have introduced a number of them before (ref).

I am not saying you should not use the SSH (Struts/Spring/Hibernate) stack; in my view the SSH stack genuinely standardizes development. But not using it does not, by itself, improve performance much either.

The SSH stack really suits only very large projects with teams of hundreds that need to keep growing in size, where you must choose technologies the market recognizes and people are familiar with; SSH (Struts/Spring/Hibernate) is more mature, so it is the first choice there.

But for a small team with strong engineers, choosing a more concise framework can genuinely speed up development; abandoning the SSH stack early in favor of something more concise is the wiser choice for small-team development.