ES6 destructuring and spread

http://www.typescriptlang.org/docs/handbook/variable-declarations.html#destructuring

Destructuring automatically maps a source value onto the assigned variables, whether the source is an array, an object, or a tuple.


const [a,b,c] = ...

const {a,b,c} = …

const [a,b,c]= …

Spread maps any number of array items to a single spread variable, and vice versa; basically it represents part of an array as one (spread) variable.


const [a, ...b] = ...
const x = [a, ...b]

And to destructure an array of objects:

let o = [
{
a: "foo",
b: 12,
c: "bar"
},
{
a: "test",
b: 20,
c: "check"
}];

let [{a}] = o ====> "foo"

let [{a: a1},{a: a2}] = o ====> "foo", "test"

publish vscode extension with bundled in images/gif

By default, vscode expects a public git repository for referencing the code and even the images shown in the markdown.

An alternative, workable solution for bundled-in images is to publish a pre-packaged .vsix.

You can override that behavior and/or set it by using the --baseContentUrl and --baseImagesUrl flags when running vsce package. Then publish the extension by passing the path to the packaged .vsix file as an argument to vsce publish.

https://code.visualstudio.com/api/working-with-extensions/publishing-extension

: in typescript destructuring

tsc kept complaining about this code having an implicit any type:

const updateFile = (type: any, { fsPath: filepath }) => {
.....
}

Turns out the : within the destructuring is for aliasing, not a type annotation.

So the code needs to change to:

const updateFile = (type: any, { fsPath: filepath }: {fsPath: any}) => {
...
}

quote:

As you can see, component is implicitly any. The : inside of your destructuring assignment in the function arguments is aliasing rather than functioning as a type signature. You could correct it by doing the following:

Don't waste time writing perfect code (do you agree?)

An interesting read: clean code vs perfect code vs pragmatic coding vs refactoring.

I guess one point missing from the article is: it's easier for a good developer to write near-perfect code than for an average developer to write clean code.

And the dilemma is: without advocating for high-quality code, developers' skills won't improve.

Ultimately, it comes down to coding skill.

https://mp.weixin.qq.com/s/YTLn8EaDfojS9s5rgbw2ow

http://swreflections.blogspot.com/2014/11/dont-waste-time-writing-perfect-code.html?m=1

https://martinfowler.com/articles/workflowsOfRefactoring/#tension

VSCode extension command not found

I kept seeing "command not found" issues during vscode extension development.

After writing all the code to register the command and expose it through menus and keybindings, it turns out the extension also needs to be activated (so that the registerCommand call actually runs):

https://stackoverflow.com/a/57620155/410289


We still need to call registerCommand to actually tie the command id to the handler. This means that if the user selects the myExtension.sayHello command from the Command Palette but our extension has not been activated yet, nothing will happen. To prevent this, extensions must register an onCommand activationEvent for all user facing commands:

{
"activationEvents": ["onCommand:myExtension.sayHello"]
}
Now when a user first invokes the myExtension.sayHello command from the Command Palette or through a keybinding, the extension will be activated and registerCommand will bind myExtension.sayHello to the proper handler.

You do not need an onCommand activation event for internal commands but you must define them for any commands that:

Can be invoked using the Command Palette.
Can be invoked using a keybinding.
Can be invoked through the VS Code UI, such as through the editor title bar.
Is intended as an API for other extensions to consume.

https://code.visualstudio.com/api/extension-guides/command

django migrate database

After creating the database schema (models), you must include the app that contains the models in INSTALLED_APPS in the settings for the migration to work:


Using models
Once you have defined your models, you need to tell Django you’re going to use those models. Do this by editing your settings file and changing the INSTALLED_APPS setting to add the name of the module that contains your models.py.

For example, if the models for your application live in the module myapp.models (the package structure that is created for an application by the manage.py startapp script), INSTALLED_APPS should read, in part:

INSTALLED_APPS = [
#...
'myapp',
#...
]
When you add new apps to INSTALLED_APPS, be sure to run manage.py migrate, optionally making migrations for them first with manage.py makemigrations.

ERR_SSL_PROTOCOL_ERROR

One common reason for the above error is HSTS (HTTP Strict Transport Security: a way for sites to elect to always use HTTPS), where the browser always enforces HTTPS for the domain regardless of the protocol requested.

To disable this in Chrome, go to chrome://net-internals/#hsts

1. confirm the domain is already included for HSTS
Query HSTS/PKP domain
Input a domain name to query the current HSTS/PKP set:

Domain: (the domain facing the issue)


2. then delete the domain
Delete domain security policies
Input a domain name to delete its dynamic domain security policies (HSTS and Expect-CT). (You cannot delete preloaded entries.):

a peek of python’s state

Python is growing popular, in my view mainly due to its lower entry barrier. However, that low barrier exists partly because the language has historically not (yet) been designed with extreme caution.
While many mature languages have a big community/collective intelligence forming principles and guidance before features and implementations are established, which secures a robust, stable, scalable language and ecosystem, Python was not born or grown that way.

It's easy to start with, but that doesn't equally mean it's good to build on. Just a personal thought at the moment.

A peek at the dependency management state alone (with only two major versions of Python at the moment):

[image: Python environment / dependency management diagram]

(I am happy to build on and with Python; however, just my 2 cents, it's not yet ready for every enterprise.)

timeout on android emulator

I kept encountering this exception while pushing code to the Android emulator:

[screenshots: emulator connection timeout error]
similar to this
https://github.com/react-community/create-react-native-app/issues/144

It turned out the cause was the firewall. After adding all the Android tools (Android Studio, emulator.exe, adb, avd, etc.) to the firewall's exception list, the issue was sorted out.

Tools & Dev Ops & Middleware

1. ivy vs maven
Ivy is a dedicated dependency management/repository tool, normally used together with Ant.
Maven is more than dependency management, but it has also become one of the most popular dependency repository formats.

2. make vs ant vs maven vs gradle

make is the dinosaur-age build tool, dating back to the 1970s.
Ant is also a dedicated build tool, where you define "targets" to run using the Ant libraries (Java).
Maven also wears the hat of a build tool. It starts with settings.xml (the repository locations) and pom.xml (the per-project configuration).
One advantage Maven has over Ant is that Maven defines a lot of conventions (which have become a de facto standard and provide a lot of convenience, similar to Spring being "opinionated"). So instead of telling Ant where the classes and resources are compiled from and to, Maven comes with a default compile goal that just works, unless your folder structure differs from the convention (in which case it can simply be configured in pom.xml).

+---src
|   +---main
|   |   +---java
|   |   |   \---com
|   |   |       \---best2lwjj
|   |   |           \---services
|   |   |                   Super.java
|   |   |                   
|   |   \---resources
|   \---test
|       +---java
|       \---resources

Gradle is a newer challenger, which features Groovy scripts instead of XML (build.gradle & settings.gradle) and provides "unlimited" functionality conveniently: instead of building a library or a Maven plugin, build.gradle can be written in a fully functioning Groovy language.

3. CD & CI: jenkins vs hudson
Both actually come from the same origin. Hudson was first started at Sun (before it was acquired by Oracle many years back) and was open source.
After Oracle took over, the open source community moved on to create Jenkins from the same original source code. (Jenkins has become far more popular now.)

Both, like other CI & CD tools, basically poll the source code repo (be it SVN, CVS, or Git) and then take the corresponding configurable actions, such as an Ant compile, a Maven build, Gradle integration testing, etc.

4. CI & CD with gitlab
Git has become more and more popular (Git being another project from Linus Torvalds, it borrows a lot of ideas, for example from the Linux file system).

One difference from other CI tools is that GitLab enables customized GitLab runners: dedicated servers to run certain tasks (configurable through tags). This creates a lot of possibilities.
For example, one thing I did at Goldman was to create a new pipeline that picked up code changes from a feature branch, then compiled, tested, integration tested, built, packaged, pushed to the cloud/repository, deployed and restarted the server, all in one go without a single manual intervention.
This became possible with GitLab tags and runners.

5. gitlab
Normally master / rc-xx / release-xx are the main long-lived branches, which are protected and monitored for automation/CI/CD.

6. common issue with maven dependencies

https://stackoverflow.com/questions/4701532/force-maven-update/9697970#9697970

Front End

1. redux flux

[diagrams: Flux data flow; Redux data flow]

2. jsx virtual dom
[diagram: JSX / virtual DOM rendering]

3. reactJs redux

Basically similar to the Flux flow (Redux is one Flux implementation). In (React) Redux, a component event (a user action, for example clicking a button) triggers/generates an action. The action is dispatched into Redux (like a container), which has a list of registered listeners (reducers) that act on the action (in Redux, an action is a noun, like a domain object, carrying the type of the action plus any additional parameters). A reducer can 1) construct a new state (from the existing state + the new action) and store it in the store; or 2) with thunk, it can actually perform some work, for example call a REST service, and then generate a new state.
After the new state is produced (it is always maintained/persisted in the store), the interested components can listen (observer) to that state (mapStateToProps) and update the component (display) correspondingly.



4. set up store, reducer, thunk

import { createStore, applyMiddleware } from "redux";
import thunk from "redux-thunk";

const store = createStore(
  reducer,
  applyMiddleware(thunk)
);

5. thunk
Basically, "plain" Redux (supposed to be pure) only takes in plain actions (domain objects, POJOs). However, there are cases where it needs to run real work (for example, calling a web service); with thunk, the "action" can be a function, which then returns/dispatches the real action (the plain object).

export function getPosts_actionCreator_Thunk() {
  return function(dispatch) {
    return fetch("https://lwpro2.wordpress.com/super/posts")
      .then(response => response.json())
      .then(json => {
        dispatch({ type: "POST_LOADED", payload: json });
      });
  };
}

6. saga
Saga is another way to implement what thunk does. To reiterate, Redux does not understand any type of action other than a plain object (for Redux, actions are plain domain objects).
Thunk is a middleware that extends Redux so that a function (the thunk function) can be put into the action creator; it is run and then returns/dispatches the action (the domain object).
Saga works in another way. Saga is like an interceptor: the original action creator stays the same and simply returns a plain domain action. However, saga can intercept all actions, and if an action is one the saga is interested in (listener/observer pattern), it runs its own logic (the call to a web service, for example).

// use a distinct trigger type ("LOAD_POSTS") so that the worker saga's
// "POST_LOADED" dispatch does not re-trigger the saga in an infinite loop
export function getPosts_actionCreator_original() {
  return { type: "LOAD_POSTS" };
}

export function loadPost_realFunction_original() {
    return fetch("https://lwpro2.wordpress.com/super/posts")
      .then(response => response.json());
}

import { takeEvery, call, put } from "redux-saga/effects";
export default function* interceptor_observer_Saga() {
  yield takeEvery("LOAD_POSTS", workerSaga);
}

function* workerSaga() {
  try {
    const payload = yield call(loadPost_realFunction_original);
    yield put({ type: "POST_LOADED", payload });
  } catch (e) {
    yield put({ type: "POST_LOAD_ERROR", payload: e });
  }
}

frameworks

1. Spring DI IoC
Without a framework, to invoke methods on another class we normally need to create an instance of the class ourselves and then call the method.
IoC, inversion of control, sets this up for us in advance: instead of us creating the object and calling the method on it, the object is prepared and injected for us, hence the "inversion" of control.

DI (dependency injection) is one implementation of IoC. Both Spring and Guice are DI frameworks.
This saves a lot of effort for developers; the cost is a slower application start-up time (the effort/time is paid up front). This is the initial reason Spring started becoming popular back in the WebWork/Struts days.

2. @autowired
Spring used to use XML for bean creation and DI: basically telling Spring which beans/classes/objects are to be managed by the bean factory/context, and which fields/beans are needed for each bean to be created. The Spring library reads this configuration and does the setup job.

As annotations (driven by reflection) became popular, they saved the effort of maintaining separate XML files (while arguably becoming less centrally maintainable). So the XML configuration gained annotation equivalents: @Autowired is the equivalent of declaring, in XML, which constructs a bean needs in order to be created. A minimal sketch follows below.
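
As a rough sketch only (assuming spring-context on the classpath; the GreetingService/Greeter bean names are made up for illustration):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.stereotype.Component;

    public class DiDemo {

        @Component
        static class GreetingService {
            String greet(String name) { return "hello " + name; }
        }

        @Component
        static class Greeter {
            private final GreetingService service;

            @Autowired  // the container prepares GreetingService and injects it; we never call new
            Greeter(GreetingService service) { this.service = service; }

            void run() { System.out.println(service.greet("spring")); }
        }

        public static void main(String[] args) {
            // register the two bean classes and let the container wire them together
            AnnotationConfigApplicationContext ctx =
                    new AnnotationConfigApplicationContext(GreetingService.class, Greeter.class);
            ctx.getBean(Greeter.class).run();   // prints: hello spring
            ctx.close();
        }
    }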

3. Spring MVC
This is the Spring implementation of the MVC pattern, corresponding to Struts, the default Java servlet approach, and other frameworks. Basically it declares which URL is mapped to which class, and what the result (view) is, with the model passed around (originally in XML).

With annotations this becomes @RestController, @Controller, @RequestMapping, @RequestParam, etc.; a minimal sketch follows.
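
A rough sketch, assuming spring-webmvc; the /orders URL and class/parameter names are illustrative only:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    // @RestController = @Controller + @ResponseBody: return values are written
    // straight to the HTTP response instead of resolving to a view name
    @RestController
    @RequestMapping("/orders")
    public class OrderController {

        // GET /orders/search?symbol=GOOG  ->  "searching orders for GOOG"
        @GetMapping("/search")
        public String search(@RequestParam("symbol") String symbol) {
            return "searching orders for " + symbol;
        }
    }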

4. thread safety in Spring
By default, Spring beans are singletons, though this can be changed to prototype scope (a new bean per injection/request for the bean).

Singleton beans are not inherently thread safe; like servlets, a single instance is shared among requests/threads.

5. spring AOP
AOP is for code that is cross-cutting or scattered. There are a lot of common tasks, like auditing or access control, which without AOP would mean duplicating the same code in multiple different places.
AOP basically defines pointcuts (the places), join points, and advice (the real work to be done, like the audit / access control), as sketched below.
Spring has mainly two proxy mechanisms for AOP: the default JDK dynamic proxy, or CGLIB (the JDK proxy is preferred by Spring for performance considerations).
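
A minimal audit-style aspect sketch (assuming spring-aop and aspectjweaver on the classpath; the com.example.service package and names are hypothetical):

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.aspectj.lang.annotation.Pointcut;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class AuditAspect {

        // pointcut: every public method in the (hypothetical) service package
        @Pointcut("execution(public * com.example.service..*(..))")
        public void serviceMethods() {}

        // advice: the cross-cutting audit logic, applied before each matched join point
        @Before("serviceMethods()")
        public void audit(JoinPoint jp) {
            System.out.println("AUDIT: calling " + jp.getSignature());
        }
    }

For this to take effect, a configuration with @EnableAspectJAutoProxy (or Spring Boot's auto-configuration) is assumed.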

6. spring boot
Spring Boot tries to make developers' jobs easier by providing a lot of defaults (convention over configuration).
Instead of XML or a lot of annotations, Spring Boot assumes conventions (and inspects the libraries on the classpath) to pre-configure things.

7. hibernate
JPA, the newer Java EE ORM specification (no more entity EJBs), shares its origins with the newer Hibernate implementations, so there is a lot of commonality.
Hibernate is probably the most popular JPA provider for now; a minimal sketch follows.
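
A rough JPA sketch (javax.persistence annotations; Hibernate is just assumed to be the provider behind EntityManager; the Trade entity and the "demo-unit" persistence unit are made up for illustration):

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Persistence;

    @Entity
    public class Trade {
        @Id @GeneratedValue
        private Long id;
        private String symbol;

        protected Trade() {}                         // JPA needs a no-arg constructor
        public Trade(String symbol) { this.symbol = symbol; }
    }

    class TradeDemo {
        public static void main(String[] args) {
            // "demo-unit" is a hypothetical persistence unit defined in persistence.xml
            EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-unit");
            EntityManager em = emf.createEntityManager();
            em.getTransaction().begin();
            em.persist(new Trade("GOOG"));           // the provider generates and runs the INSERT
            em.getTransaction().commit();
            em.close();
            emf.close();
        }
    }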

8. myBatis
While Hibernate translates between objects and SQL (calls to create/insert/update/delete objects actually run the Hibernate-generated SQL on the RDBMS or another DB),
myBatis is more for the reverse: if the DB is old and the schema cannot change, direct SQL is preferred over leaving it to Hibernate to construct even simple queries.
Even though Hibernate has its own HQL, myBatis has from the beginning been oriented toward direct SQL.

9. apache camel vs spring integration
Both are EIP (Enterprise Integration Patterns) frameworks. Apache Camel has a massive number of supported endpoints/URIs, like HTTP, MQ, timers, etc.
As the Spring framework became more and more popular, it started to incorporate various other functionality (besides DI); Spring Integration is Spring's counterpart to Camel.

JVM

  • 1. reflection
  • Java reflection is used to inspect and modify existing class, method, and field behavior at runtime, for example access modifiers.
    To note, this is a relatively expensive operation. A minimal sketch follows.
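
    A minimal sketch of reading and overriding an access modifier via reflection (the field name is made up):

    import java.lang.reflect.Field;

    public class ReflectionDemo {
        private String secret = "hidden";

        public static void main(String[] args) throws Exception {
            ReflectionDemo target = new ReflectionDemo();
            Field f = ReflectionDemo.class.getDeclaredField("secret");
            f.setAccessible(true);                 // bypass the private modifier
            System.out.println(f.get(target));     // -> hidden
            f.set(target, "exposed");              // modify the field reflectively
            System.out.println(f.get(target));     // -> exposed
        }
    }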

  • 2. serialization
  • Serializable is a marker interface. ObjectOutputStream's writeObject method is for serialization and ObjectInputStream's readObject is for deserialization. A class can also define its own private writeObject/readObject methods to provide customized serialization.

    Transient variables are ignored during serialization, as in the sketch below.
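
    A minimal round-trip sketch (class and field names are illustrative):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class SerializationDemo implements Serializable {
        private static final long serialVersionUID = 1L;

        private String name = "kept";
        private transient String password = "dropped";   // transient: skipped by serialization

        public static void main(String[] args) throws Exception {
            SerializationDemo original = new SerializationDemo();

            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);                // serialize
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                SerializationDemo copy = (SerializationDemo) in.readObject();  // deserialize
                System.out.println(copy.name + " / " + copy.password);         // kept / null
            }
        }
    }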

  • 3. dynamic proxy
  • This follows the proxy pattern, kind of adding a proxy/facade in front of the real method invocation (see the sketch below).
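
    A minimal JDK dynamic proxy sketch (the Greeter interface is made up for illustration):

    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    public class ProxyDemo {
        public interface Greeter { String greet(String name); }

        public static void main(String[] args) {
            Greeter real = name -> "hello " + name;

            // the proxy sits in front of the real method invocation
            Greeter proxy = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[]{Greeter.class},
                    (Object p, Method method, Object[] a) -> {
                        System.out.println("before " + method.getName());
                        Object result = method.invoke(real, a);
                        System.out.println("after " + method.getName());
                        return result;
                    });

            System.out.println(proxy.greet("world"));   // before greet / after greet / hello world
        }
    }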

  • 4. clone
  • The default clone method (a native method in Object) is a shallow copy: primitive fields are cloned by value, while object fields are
    "cloned by value of the reference (the memory address)", so both copies point to the same objects.

    To do a deep clone, serialization is one approach; another is to override the clone method and clone the object fields as well, as sketched below.
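
    A minimal deep-clone-by-overriding sketch (the Date field is just an example of a mutable object field):

    import java.util.Date;

    public class CloneDemo implements Cloneable {
        private int count = 1;              // primitive: copied by value either way
        private Date created = new Date();  // object field: shared by a plain shallow copy

        @Override
        protected CloneDemo clone() throws CloneNotSupportedException {
            CloneDemo copy = (CloneDemo) super.clone();   // shallow copy first
            copy.created = (Date) created.clone();        // then clone the mutable field -> deep copy
            return copy;
        }

        public static void main(String[] args) throws Exception {
            CloneDemo a = new CloneDemo();
            CloneDemo b = a.clone();
            System.out.println(a.created != b.created);   // true: separate Date objects
        }
    }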

  • 5. memory
  • The stack is for method execution: each method call corresponds to one stack frame pushed on top of the previous one and removed once the method finishes.
    Local primitive variables used during a method invocation are stored on the stack and cleared once the method finishes, while objects created/referred to in methods are put on the heap.

    The stack size can be set at JVM start-up using -Xss (the maximum stack memory per thread).
    One example of a possible stack overflow is recursive function calls: if the recursion goes too deep, it causes a StackOverflowError.

    The heap is the main memory area within the JVM, divided into two parts: the Young Generation (Eden + the S1/S2 survivor spaces) and the Old Generation / Tenured space.

    Before Java 8, PermGen was used to store class info, metadata, and string pools. From Java 8, PermGen is removed and an equivalent area called Metaspace is created.

    Recent JVM implementations use generational GC: a minor GC on the young generation (when there is not enough Eden space to allocate new objects), and a major GC on the tenured space (when there is not enough memory in the tenured space to promote/allocate objects).

    There are multiple GC algorithms, like Serial, Parallel, ParNew, CMS, G1, etc. For the young generation, most recent JVMs (HotSpot, IBM, JRockit) use a copying algorithm, because most objects are not long-lived. So after objects are created and put into Eden, the first young GC clears most of them, while the surviving minority is COPIED to the S1/"to" survivor space. At any time, one of the two survivor spaces is kept empty to be copied into.
    By default, after 15 copying GC cycles (this can be changed via the tenuring-threshold JVM parameter), a surviving object is promoted to the tenured space.
    There are exceptions: if an object is too big to fit into Eden or a survivor space (Eden to S1 to S2 is 8:1:1 by default), it can be promoted to the tenured space immediately.

    For GC on the tenured space before Java 8, recent JVMs mostly used CMS (which is good for concurrency): it first marks objects to be cleared, which invokes their finalize method as a last try; if an object is still not referenced (older JVMs used reference counting, newer JVMs use GC-roots reachability), it is then swept.

    Note: the JVM stack holds frames for Java methods, while the native stack is for native (non-Java) methods.


  • 6. class loading
  • In Java, class loading delegates to the parent class loader first. A child class loader only loads a class if it cannot be loaded by the bootstrap loader (JRE/lib), the extension loader (JRE/lib/ext), the system class loader, and any parent application loaders (which could be EAR, then WAR, then JAR); a minimal sketch follows.

    This parent-delegation model avoids ClassCastExceptions caused by different copies of the same class loaded by different class loaders, which are incompatible with each other.

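    A minimal sketch that just prints the delegation chain for a class's loader (exact loader names vary by Java version):

    public class ClassLoaderDemo {
        public static void main(String[] args) {
            // walk up the parent-delegation chain for this class's loader
            ClassLoader loader = ClassLoaderDemo.class.getClassLoader();
            while (loader != null) {
                System.out.println(loader);   // e.g. app/system loader, then extension/platform loader
                loader = loader.getParent();
            }
            System.out.println("bootstrap loader (represented as null)");

            // core classes are loaded by the bootstrap loader, so this prints null
            System.out.println(String.class.getClassLoader());
        }
    }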

  • 7. memory leak
  • There are two types of memory leak. In the first, objects are created and later no longer used, but cannot be GCed, so they continue to hog memory. Examples: the objects are referred to by static variables of long-lived classes (classes normally live in PermGen/Metaspace and live until the JVM exits), as in the sketch below; the objects are referred to by collections (the object may no longer be used, but the collection still holds a pointer to it); or an inappropriate implementation of the object, for example equals & hashCode, so that an object can be put into a hash bin/node/bucket but never retrieved again, because the equals method does not comply with the hashCode implementation.

    The other type of memory leak is object creation outpacing GC. Some frameworks/libraries/wrong implementations generate objects (per thread, per request, as proxies, as prototypes) so fast that GC cannot keep up, which can cause an OOM.
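
    A minimal sketch of the static-reference flavour of the first type (deliberately exaggerated; the cache name is made up):

    import java.util.ArrayList;
    import java.util.List;

    public class LeakDemo {
        // static field on a long-lived class: everything added here stays reachable
        // from a GC root, so it can never be collected even if it is never read again
        private static final List<byte[]> CACHE = new ArrayList<>();

        public static void main(String[] args) {
            while (true) {
                CACHE.add(new byte[1024 * 1024]);   // eventually: OutOfMemoryError: Java heap space
            }
        }
    }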

  • 8. happens before
  • The JVM may re-arrange the order of code execution for better performance (JIT compiler optimization, for example grouping operations on the same object together). However, for certain operations, the happens-before guarantees must hold no matter how the code is re-arranged.

    Happens-before means the result of the previous operation is visible to the following operation.

    volatile: a write must happen-before subsequent reads. Since volatile variables operate directly on main memory, a write of the variable is visible to all subsequent reads (see the sketch below).

    lock: an unlock happens-before a subsequent lock. Only once the lock is released (the mark word cleared of the owning thread's info) can subsequent calls acquire the lock (putting the new thread's info into the mark word).

    Transitivity: if A happens before B, B happens before C, then A happens before C.

    Thread start rule: the start method happens before every actions on the spawned thread.
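
    A minimal sketch of the volatile happens-before rule (variable names are illustrative): the ordinary write to payload is published by the volatile write to ready, so the reader is guaranteed to see 42.

    public class VolatileDemo {
        private static volatile boolean ready = false;
        private static int payload = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread reader = new Thread(() -> {
                while (!ready) { /* spin until the writer publishes */ }
                // the volatile write to `ready` happens-before this read,
                // so `payload` is guaranteed to be visible as 42 here
                System.out.println(payload);
            });
            reader.start();

            payload = 42;     // ordinary write...
            ready = true;     // ...published by the volatile write
            reader.join();
        }
    }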

  • 9. permgen vs metaspace
  • Before Java 8, PermGen (the Permanent Generation) was used to store class and metadata info. PermGen was part of the JVM heap, so if too many classes were loaded, for example, the PermGen space could throw an OOM. (String pools were also put into PermGen in some older HotSpot JVMs, before Java 7.)



    From Java 8, PermGen has been removed, and a new space called Metaspace (kind of like a renaming) stores the same information. However, Metaspace now lives in native memory, which effectively won't throw OOM by default (unless the whole native memory of the JVM process is used up, in which case an OOM isn't thrown either because the JVM is killed).

    JVM                  Default maximum PermGen size (MB)   Default maximum Metaspace size
    32-bit client JVM    64                                  unlimited
    32-bit server JVM    64                                  unlimited
    64-bit JVM           82                                  unlimited

    The PermGen size can be adjusted like other VM parameters (such as -Xss, -Xms, -Xmx): -XX:PermSize=N and -XX:MaxPermSize=N.
    Note that if PermGen is initially set too small, it may need to be resized multiple times (before reaching MaxPermSize), and each resize causes a full GC.
    This can happen during JVM start-up: as the JVM realizes more PermGen (or heap, -Xms) is needed, it keeps resizing, which takes/wastes a lot of time.
    So one important JVM tuning is to set a proper PermSize (same for -Xms), so that resizing and full GCs (expensive operations) are minimized.

    Similarly to PermGen:
    -XX:MetaspaceSize -- the initial size of the Metaspace

    -XX:MaxMetaspaceSize -- the maximum size of the Metaspace


  • 10. Java Memory Model (JMM)
  • [diagrams: Java Memory Model overview, generational heap layout, GC, thread-local caches vs main memory, JVM structure]

  • 11. garbage collection (GC)
  • Right now, most JVMs (HotSpot, JRockit, etc.) use generational memory modeling and GC.
    This is the default space allocation on the heap:

    [diagram: default generational space allocation on the heap]

    If there is not enough space in the new (young) space to allocate an object, it triggers a minor GC. Because most objects are short-lived, the minor GC uses a copying algorithm: it clears most objects (since most are short-lived) and copies the surviving objects into one of the survivor spaces (from/to, or S1/S2). The default tenuring threshold is 15: once an object has survived 15 minor GCs, it is promoted to the tenured space. Another case for promotion is when the object is too big (even larger than the S1/S2 space, for example, so there is no way for it to stay in the new space): "too big to fail / too big to kill early".

    On the tenured space, before Java 8/9 (which use G1), CMS was the most popular collector. It is basically a concurrent version of mark-sweep: if an object is not reachable from the GC roots, it is marked (and its finalize method is run); if it is still not reachable, it is then swept.

  • 12. program registers
  • The PC (program counter) register is thread-bound (one per thread); it records the position of the currently executing code and points to the next instruction to run.
    If the thread is running a Java method, it registers the position of the bytecode.
    If the thread is running a native method, its value is undefined.

    In the quick benchmark below, the pre-increment operator performs slightly better than the post-increment and the plain x = x + 1 forms:

        public void postIncrementCost(){
            LocalTime pre = LocalTime.now();
            long start = System.currentTimeMillis();
            System.out.println("start postIncremental @==> "+ pre);
            int x = Integer.MIN_VALUE;
            for(int i=0; i< Integer.MAX_VALUE; i++) {
                while (x < Integer.MAX_VALUE) {
                    x++;
                }
                x = Integer.MIN_VALUE;
            }
            LocalTime post = LocalTime.now();
            System.out.println("end postIncremental @==> "+ post);
            long end = System.currentTimeMillis();
            System.out.println("postIncremental consumed @==> "+ (end - start));
        }
    
      
    
      public void preIncrementCost(){
            LocalTime pre = LocalTime.now();
            System.out.println("start preIncremental @==> "+ pre);
            long start = System.currentTimeMillis();
            int x = Integer.MIN_VALUE;
            for(int i=0; i< Integer.MAX_VALUE; i++) {
                while (x < Integer.MAX_VALUE) {
                    ++x;
                }
                x = Integer.MIN_VALUE;
            }
            LocalTime post = LocalTime.now();
            System.out.println("end preIncremental @==> "+ post);
            long end = System.currentTimeMillis();
            System.out.println("preIncremental consumed @==> "+ (end - start));
        }
    
     
    
       public void mathIncrementCost(){
            LocalTime pre = LocalTime.now();
            System.out.println("start mathIncremental @==> "+ pre);
            long start = System.currentTimeMillis();
            int x = Integer.MIN_VALUE;
            for(int i=0; i< Integer.MAX_VALUE; i++) {
                while (x < Integer.MAX_VALUE) {
                    x = x+1;
                }
                x = Integer.MIN_VALUE;
            }
            LocalTime post = LocalTime.now();
            System.out.println("end mathIncremental @==> "+ post);
            long end = System.currentTimeMillis();
            System.out.println("mathIncremental consumed @==> "+ (end - start));
        }
    

    start postIncremental @==> 15:47:41.764
    end postIncremental @==> 15:47:41.773
    postIncremental consumed @==> 9
    start preIncremental @==> 15:47:41.774
    end preIncremental @==> 15:47:41.782
    preIncremental consumed @==> 8
    start mathIncremental @==> 15:47:41.782
    end mathIncremental @==> 15:47:41.791
    mathIncremental consumed @==> 9

    debounce-promise

    the doc is really not clear.

    https://www.npmjs.com/package/debounce-promise

    https://npm.runkit.com/debounce-promise

    In truth, it turns out the debounced action needs to be PRE-defined (created once) and then invoked.

    So only when the function is pre-defined like this will it work:

    const preDefined = debounce((num)=> console.log("whatever action here", num), 100);
       [1, 2, 3, 4].forEach(num => {
         preDefined(num);
         });
    
    "whatever action here"
    4
    
    

    While calling debounce() inline simply behaves like a direct method call:

      [1, 2, 3, 4].forEach(num => {
        debounce(console.log("always defining", num), 100);
        });
    
    "always defining"
    1
    "always defining"
    2
    "always defining"
    3
    "always defining"
    4
    

    react render empty

    according to react doc

    Booleans, Null, and Undefined Are Ignored

    false, null, undefined, and true are valid children. They simply don't render. These JSX expressions will all render to the same thing:

    https://reactjs.org/docs/jsx-in-depth.html#booleans-null-and-undefined-are-ignored

    so

    
    
    class x extends Component{
         render(){
           return(null);
        }
    }
    
    
    

    good summary for react redux thunk

    source:
    https://medium.com/@stowball/a-dummys-guide-to-redux-and-thunk-in-react-d8904a7005d3

    1. There is 1 global state object that manages the state for your entire application. In this example, it will behave identically to our initial component’s state. It is the single source of truth.
    2. The only way to modify the state is through emitting an action, which is an object that describes what should change. Action Creators are the functions that are dispatched to emit a change – all they do is return an action.
    3. When an action is dispatched, a Reducer is the function that actually changes the state appropriate to that action – or returns the existing state if the action is not applicable to that reducer.
    4. Reducers are “pure functions”. They should not have any side-effects nor mutate the state — they must return a modified copy.
    5. Individual reducers are combined into a single rootReducer to create the discrete properties of the state.
    6. The Store is the thing that brings it all together: it represents the state by using the rootReducer, any middleware (Thunk in our case), and allows you to actually dispatch actions.
    7. For using Redux in React, the <Provider /> component wraps the entire application and passes the store down to all children.

    https://codeburst.io/redux-a-crud-example-abb834d763c9

    Jersey filter/Interception binding

    Looks like there are mainly 3 ways to bind a filter/interceptor:
    1. Global binding
    via the @Provider annotation plus implementing ClientRequestFilter/ClientResponseFilter or ContainerRequestFilter/ContainerResponseFilter;
    this applies to all requests/responses (see the sketch after the reference link)

    2. Named binding
    via a new annotation meta-annotated with @NameBinding, which is placed on both the custom filter and the resources to bind them together

    3. Dynamic binding
    via implementing DynamicFeature, which inspects each resource and registers/provides the corresponding filters for that resource

    ref: https://dzone.com/articles/binding-strategies-for-jax-rs-filters-andintercept
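
    A minimal sketch of the first (global binding) option, assuming the JAX-RS API and Jersey on the classpath; the "X-Audit-User" header is made up for illustration:

    import java.io.IOException;
    import javax.ws.rs.container.ContainerRequestContext;
    import javax.ws.rs.container.ContainerRequestFilter;
    import javax.ws.rs.ext.Provider;

    // @Provider with no extra binding annotation: Jersey applies this filter
    // to every container request (global binding)
    @Provider
    public class AuditRequestFilter implements ContainerRequestFilter {
        @Override
        public void filter(ContainerRequestContext ctx) throws IOException {
            // "X-Audit-User" is a hypothetical header, just for illustration
            System.out.println("request " + ctx.getUriInfo().getPath()
                    + " by " + ctx.getHeaderString("X-Audit-User"));
        }
    }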

    cannot set niceness : Permission denied

    I tried to run hadoop start-dfs.sh several times, and it always threw an error:


    localhost: nice: cannot set niceness: Permission denied

    Checking with jps, start-dfs.sh only brought up the NameNode and ResourceManager.

    I tried several approaches around setting niceness, to no avail. Somehow, calling the daemon script to start the DataNode directly works (even though the permission-denied warning still shows up):


    lwpro2@DESKTOP-G92MK3N:~/hadoop/hadoop-2.9.1$ sbin/hadoop-daemon.sh start datanode
    starting datanode, logging to /home/lwpro2/hadoop/hadoop-2.9.1/logs/hadoop-lwpro2-datanode-DESKTOP-.out
    nice: cannot set niceness: Permission denied

    JPM’s machine learning guide

    https://news.efinancialcareers.com/sg-en/285249/machine-learning-and-big-data-j-p-morgan/?_ga=2.239710086.402978973.1510846546-181045137.1482637321

    1. Banks will need to hire excellent data scientists who also understand how markets work

    4. An army of people will be needed to acquire, clean, and assess the data

    5. There are different kinds of machine learning. And they are used for different purposes

    6. Supervised learning will be used to make trend-based predictions using sample data

    7. Unsupervised learning will be used to identify relationships between a large number of variables

    8. Deep learning systems will undertake tasks that are hard for people to define but easy to perform

    9. Reinforcement learning will be used to choose a successive course of actions to maximize the final reward

    10. You won’t need to be a machine learning expert, you will need to be an excellent quant and an excellent programmer

    Accuracy for text classification

    The classifier algorithms have been more or less unchanged for almost 10 years. I think a good reason for that could be that, for example, NB and SVM have been able to achieve relatively high accuracy for a long time, provided with optimal/sub-optimal parameters.

    At the same time, in my experience, a good approach to bump up the accuracy of the overall text classification result is data/corpus preparation, including stop words, POS tagging, TF-IDF, etc.

    Saw a good post on accuracy of text classification, echoing this:

    6 Practices to enhance the performance of a Text Classification Model

    Supervised machine learning

    libsvm is the first supervised machine learning library I used extensively, more than 10 years back.

    It was pretty awesome back then, seeing 78% text classification accuracy against more than 100,000 hotel reviews I had crawled from ctrip.com.

    Now, at version 3, they are able to achieve 96.875% on text classification, as documented here:

    Click to access guide.pdf

    https://www.csie.ntu.edu.tw/~cjlin/libsvm/

    Play around with chrome headless

    source code

    https://github.com/lwpro2/ChromeHeadless

    1. get the chrome remote interface package
    npm install chrome-remote-interface

    2. run the code

    cdp({
    ..
        },
        async client => {
    
        let {data} = await Page.captureScreenshot({
            format: 'png',
        });
    
    });
    

    And it's even easier if using puppeteer:

        const browser = await puppeteer.launch();
        const page = await browser.newPage();
    
        await page.goto('https://lwpro2.wordpress.com');
        await page.screenshot({ path: 'blog.png' });
    
    

    Not able to query on geode server

    Using gfsh to query always returned me "could not create an instance of class xxx", or some serialization/deserialization error.

    From the client side, though, both persisting and retrieving worked fine.

    I hope it's not a compatibility bug, but I am not sure of the reason at the moment.

    =============================

    Finally sorted out the issue: it needs the following configured

    read-serialized 
    

    on the server side before starting the server

    For a starter

    convert number to English words

    /**
      * @author lwpro
      * @since 10/17/2017
      * @version 1
      */
    object NumberTranslator extends App {
    
      def translateSingle(num: Int): String = {
        num match {
          case 0 => "zero"
          case 1 => "one"
          case 2 => "two"
          case 3 => "three"
          case 4 => "four"
          case 5 => "five"
          case 6 => "six"
          case 7 => "seven"
          case 8 => "eight"
          case 9 => "nine"
        }
      }
    
        def translateDouble(num: Int): String = {
    
          num match {
            case 10 => "ten"
            case 11 => "eleven"
            case 12 => "twelve"
            case 13 => "thirteen"
            case 14 => "fourteen"
            case 15 => "fifteen"
            case 16 => "sixteen"
            case 17 => "seventeen"
            case 18 => "eighteen"
            case 19 => "nineteen"
            case 20 => "twenty"
            case x if 21 until 30 contains x => "twenty " concat (translateSingle(x - 20))
            case 30 => "thirty"
            case x if 31 until 40 contains x => "thirty " concat (translateSingle(x - 30))
            case 40 => "forty"
            case x if 41 until 50 contains x => "forty " concat (translateSingle(x - 40))
            case 50 => "fifty"
            case x if 51 until 60 contains x => "fifty " concat (translateSingle(x - 50))
            case 60 => "sixty"
            case x if 61 until 70 contains x => "sixty " concat (translateSingle(x - 60))
            case 70 => "seventy"
            case x if 71 until 80 contains x => "seventy " concat (translateSingle(x - 70))
            case 80 => "eighty"
            case x if 81 until 90 contains x => "eighty " concat (translateSingle(x - 80))
            case 90 => "ninety"
            case x if 90 until 100 contains x => "ninety " concat (translateSingle(x - 90))
          }
        }
    
    
        def translateBlock(num: Int) = {
          num match {
            case x if 0 until 10 contains x => translateSingle(num)
            case x if 10 until 100 contains x => translateDouble(num)
            case x if (100 until 1000 contains x) && (x %100 == 0) => translateSingle(num / 100) concat " hundred"
            case x if x % 100 < 10 => translateSingle(num / 100) concat " hundred and " concat (translateSingle(num % 100) )
            case _ => translateSingle(num / 100) concat " hundred and " concat (translateDouble(num % 100) )
          }
        }
    
      for (i <- 0 until 1000)
        println( i.toString concat("::") concat translateBlock(i))
    
    
      def translateWhole (num: Int) = {
        num.toString.length match {
          case x if 1 to 3 contains x => translateBlock(num)
          case x if 4 to 6 contains x => translateBlock(num / 1000) concat (" thousand and ") concat (translateBlock(num % 1000))
          case x if 7 to 9 contains x => translateBlock(num / 1000000) concat (" million and ") concat translateBlock(num % 1000000 / 1000) concat (" thousand and ") concat (translateBlock(num % 1000))
        }
      }
    
      }
    
    
    

    Bye, thread safety on java Date & SimpleDateFormat

    I used to get pulled into various issue-troubleshooting sessions. Several occurrences were related to SimpleDateFormat & java.util.Date, from various different developers and teams.

    It's not rocket science for most experienced developers, but for some new programmers it is one of the most messed-up areas.

    Now Java 8 moves these to immutable types: the core java.time date-related classes in Java 8 are immutable (see the sketch below):

    http://www.oracle.com/technetwork/articles/java/jf14-date-time-2125367.html
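
    As a small illustration of the java.time direction (an immutable, thread-safe formatter, unlike the mutable SimpleDateFormat; the pattern and date are arbitrary):

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    public class DateTimeDemo {
        // DateTimeFormatter is immutable, so one shared instance is safe across threads,
        // whereas a shared SimpleDateFormat must not be used without synchronization
        private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd");

        public static void main(String[] args) {
            LocalDate date = LocalDate.parse("2014-03-18", FMT);
            System.out.println(FMT.format(date.plusDays(1)));   // 2014-03-19
        }
    }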

    (I guess Scala’s popularity nowadays should not be a surprise.)

    Another angle of view: imperative/procedural vs functional/declarative

    quote
    https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/functional-programming-vs-imperative-programming
     

    Transitioning for OOP Developers
    In traditional object-oriented programming (OOP), most developers are accustomed to programming in the imperative/procedural style. To switch to developing in a pure functional style, they have to make a transition in their thinking and their approach to development.
    To solve problems, OOP developers design class hierarchies, focus on proper encapsulation, and think in terms of class contracts. The behavior and state of object types are paramount, and language features, such as classes, interfaces, inheritance, and polymorphism, are provided to address these concerns.
    In contrast, functional programming approaches computational problems as an exercise in the evaluation of pure functional transformations of data collections. Functional programming avoids state and mutable data, and instead emphasizes the application of functions.

    Image vs Containers for Docker

    Not many developers explain technologies clearly, whether by intention or inability. The post here, however, is one exception:

    from: http://blog.codesupport.info/docker-images-vs-containers/

     

    IMAGE :- An image is an inert, immutable, file that’s essentially a snapshot of a container. Images are created with the build command, and they’ll produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a miminal amount of data to be sent when transferring images over the network.
     CONTAINER :- To use a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are hopefully why you’re using Docker; they’re lightweight and portable encapsulations of an environment in which to run applications.

    The RPC didn’t feel so long ago

    It doesn't seem like a century ago when stubs and skeletons were widely used, and WSDL2Java & Java2WSDL were pretty convenient and "cool":

    1. A Java program executes a method on a stub (local object representing the remote service)
    2. The stub executes routines in the JAX-RPC Runtime System (RS)
    3. The RS converts the remote method invocation into a SOAP message
    4. The RS transmits the message as an HTTP request

    It's now optional in Java EE 7, but good to see it's still there.

    https://java.net/projects/jax-rpc/

    Chrome App with AngularJS & jQuery

    There are mainly 3 parts to building a Chrome app:

    Static Page

    1. popup.html would be the popup view shown when you click on the icon of the app/plugin.

    here you define the view, and the binding to the controller/service functions

    <div ng-controller="shortCutController" ng-model="option" class={...} />
    <button id="configure" class="btn btn-info" ng-click="option = !option; initMap()">{{ option ? "Hide": "Show"}} Configure</button>

    app.js

     

    1. This is the place to put your controllers/services:

    app.controller('shortCutController', ['$scope', '$rootScope', 'mapStore', function($scope, $rootScope, mapStore) {
        ...
    }]);

    I am using chrome storage, so the controller does a sync on each update:

    $scope.update = function(key, value) { // update the storage
        //mapStore.update(key, value);
        $rootScope.mappers[key] = value;
        chrome.storage.sync.set({"mapper": $rootScope.mappers}, function() {
            console.log("check the updated mappers: " + $rootScope.mappers);
        });
    };

     

    background.js

    This is the place for your plugin to listen to registered events; in this case, it's listening to the omnibox:

    chrome.omnibox.onInputEntered.addListener(
        function(text) {
            var data = {};
            chrome.storage.sync.get("mapper", function(keys) {
                ......
                chrome.tabs.query({
                    .............
                });
            });
        });

     

    Here is the link for the app: https://chrome.google.com/webstore/detail/short-cut-for-url-mapper/lafchflokhmpcoaondfeffplkdnoaelh

    Android React native app

    I have played around with React Native and published an Android app earlier.

    some thoughts:

    1. navigator

      This would normally be the entry point, which defines the landing page and how routing to different pages works.

    
    
    <Navigator
        initialRoute={{ title: 'Tap Scene', id: 'default' }}
        renderScene={this.navigatorRenderScene}
    />

     

    Scene

    2. Each scene is a Component, where the lifecycle methods (constructor, componentDidMount, componentWillUnmount) can be used to trigger specific actions. In each scene you can use JSX to create the page/view, which can link to functions, like normal JS binding:

    onActionSelected
    onIconClicked
    handleClick() {
        if(!this.state.favnumber)
            LocalToastAndroid.show('Please set up your favorite number first', LocalToastAndroid.SHORT);
        else
            LocalToastAndroid.call(this.state.favnumber);
    };

    Additional actions

    3. The built-in React Native modules won't cover every function you need. To create a custom module, you can register an additional

    ReactContextBaseJavaModule

    then expose the method to JS:

    @ReactMethod
    public void call(String number){
    .....
    }
    module.exports = NativeModules.CustomizedModule;

    here is the app: https://play.google.com/store/apps/details?id=com.best2lwjj

    AI for system support

    I had been trying to build an AI bot for almost 3 years, and finally did a prototype; here it is in case anybody would like to do something similar:

    Technologies:

    Java, Spring Boot, Spring, SQLite, PostgreSQL, Scala, Python, Anaconda, scikit-learn, EWS, Bootstrap, AngularJS/jQuery/HTML/CSS, Symphony API, Cisco API

    Components:

    Data Set

    1. I built a Scala web crawler to download all historical support issues.
    2. At the same time, I manually cleaned up/read through each of the thousands of support issues and put in the corresponding resolution for each.
    AI
    1. Leveraged Anaconda & scikit-learn for NLP: tokenized each support issue (text), removed stop words, stemmed each token, removed punctuation.
    2. Leveraged Anaconda & scikit-learn to bag each token of the text as a feature vs. class, fed into a linear regression classifier, also tried sLDA; so far working at 72% accuracy.
    AI Exposer
    1. Exposed the AI as a service.
    Issue Feeder
    1. Leveraged EWS to read in all issues and post them to the AI service.
    UI
    1. Built a web user interface on top of HTML5 + jQuery + Bootstrap to show the support emails + the AI-suggested resolutions.
    2. Added an option on the UI to provide user feedback to the AI, to keep its intelligence updated.
    Notifier
    1. Leveraged the Java Mail API, EWS, chat API, and phone API to post alerts for critical issues.

    Datetime issue

    The nasty datetime format issue seems to be universal. In the C# world, depending on the locale, it formats to string and parses to DateTime differently, and sometimes wrongly.

    For example, for CultureInfo("ja-JP"), DateTime.ToString() could produce a yyyy/MM/d format which is not recognized by a lot of other systems or places.
    And for the same locale, it would parse 04Apr16 as 2004-04-16.

    To solve and avoid the issue, InvariantCulture should be the cure. Always use InvariantCulture for parsing and for formatting:
    DateTime.ParseExact(dateString, format, CultureInfo.InvariantCulture).ToString(format, CultureInfo.InvariantCulture).

    Unbrick LG G3

    If you are an LG G3 user, you might be tempted to update to Marshmallow when you see the update available through LG PC Suite. Please don't!

    If it happens you already did, you might now end up in the situation a lot of other LG G3 users are in, where the phone is bricked in a boot loop. Basically, the phone restarts infinitely, but never boots completely: it keeps relaunching up to the boot screen, then shuts off and reboots to the boot screen again.

    If you are already in the above situation, my experience here might help. I encountered this during the weekend and almost gave up and went to the customer service centre as a last resort. Then somehow the steps below worked!

    (Leave the power button aside, you never need to touch that button during this whole procedure.)

    1. power off the phone by removing the battery
    2. with the phone still off, plug the usb, connect the phone with PC
    3. with LG PC Suite opened on the PC, and the phone still off, just hold the Volume Up button for a few seconds; this moves the phone into download mode
    4. in LG PC Suite, under the "Mobile Device" menu there is an option called "Restore upgrade errors". Click it; it will try to connect the phone and download the patch to revert the previous update. Wait till it finishes, and your phone will be restarted automatically.

    In case the above steps don't work on the first try, try once or twice more. It may need to patch multiple times if your phone is moving up from a lower Lollipop version.

    Let me know if it helps !

    Raspberry PI as NAS

    Just finished setting up my RP 2 as a NAS. Here is the summary of what it took to achieve that:

    Hardware
    1. Raspberry PI 2
    2. USB hard disk
    3. HDMI cable
    4. TV
    5. USB keyboard and Mouse
    6. Internet cable
    7. SD Card

    OS
    I tried the official Raspberry Pi Raspbian, which turned out extremely laggy. Then I switched to Ubuntu 14 LTS, which doesn't work well on the RP 2 directly.
    Finally I installed Ubuntu MATE, which works very smoothly and almost perfectly.

    One thing to note when installing Ubuntu MATE on the RP 2: formatting the SD card to ext4 is the easiest approach.

    NAS
    To have a fully functional NAS, I installed and set up the following:
    1. miniDLNA
    pretty light and straightforward setup; just include the folders you would like to serve over DLNA, and have the app auto-start
    2. USB hard disks normally have issues being powered directly from the RP 2, due to the limited power from the USB port. Many suggest using a USB hub; for myself, a dual-USB connector cable worked very well
    3. mount the hard disk; in my case, it is mounted to /media
    4. there are several VNC servers; I used vnc4server, which is pretty light and worked perfectly
    5. enable SSH
    with the above two steps, a keyboard, mouse, and display for the RP are seldom necessary
    6. set up samba
    add the shared directory in /etc/samba/smb.conf
    7. auto download
    I set up aria2c, but seldom use it, as there is a Chinese software, Thunder Remote, that works pretty nicely

    PS: on your home computer, if you have Xming installed, it can mostly replace the need for a VNC viewer as well.

    JIT hotspot compiler

    The Java JIT compiler compiles Java bytecode to native executable code at runtime. This improves the performance of your program significantly. The JIT compiler uses runtime information to identify the parts of your application which are runtime intensive. These so-called "hot spots" are then translated to native code. This is why the JIT compiler is also called the "HotSpot" compiler.

    The JIT keeps both the original bytecode and the native code in memory, because it can also decide that a certain compilation must be revised (deoptimized and recompiled).

    Java volatile again

    http://en.wikipedia.org/wiki/Volatile_variable

    The Java programming language also has the volatile keyword, but it is used for a somewhat different purpose. When applied to a field, the Java volatile guarantees that:
    
        (In all versions of Java) There is a global ordering on the reads and writes to a volatile variable. This implies that every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value. (However, there is no guarantee about the relative ordering of volatile reads and writes with regular reads and writes, meaning that it's generally not a useful threading construct.)
        (In Java 5 or later) Volatile reads and writes establish a happens-before relationship, much like acquiring and releasing a mutex.[9]
    
    Using volatile may be faster than a lock, but it will not work in some situations.[citation needed] The range of situations in which volatile is effective was expanded in Java 5; in particular, double-checked locking now works correctly.[10]
    

    http://en.wikipedia.org/wiki/Happened-before

    In computer science, the happened-before relation (denoted: →) is a relation between the result of two events, such that if one event should happen before another event, the result must reflect that.
    

    Fail fast vs fail safe

    Fail fast: throws an exception on any concurrent modification.

    Fail safe: works on a separate/clean copy for modification and doesn't throw an exception.
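
    A minimal sketch contrasting the two, using java.util.ArrayList (fail-fast) and java.util.concurrent.CopyOnWriteArrayList (fail-safe):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.ConcurrentModificationException;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class IteratorDemo {
        public static void main(String[] args) {
            // fail-fast: structural modification during iteration throws
            List<String> failFast = new ArrayList<>(Arrays.asList("a", "b"));
            try {
                for (String s : failFast) {
                    failFast.add("c");
                }
            } catch (ConcurrentModificationException e) {
                System.out.println("fail-fast iterator threw: " + e);
            }

            // fail-safe: the iterator walks a snapshot copy, so no exception,
            // but it does not see the elements added during iteration
            List<String> failSafe = new CopyOnWriteArrayList<>(Arrays.asList("a", "b"));
            for (String s : failSafe) {
                failSafe.add("c");
            }
            System.out.println("fail-safe list size after iteration: " + failSafe.size()); // 4
        }
    }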

     java.util.concurrent.CopyOnWriteArrayList<E>
    
    A thread-safe variant of java.util.ArrayList in which all mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array. 
    
    This is ordinarily too costly, but may be more efficient than alternatives when traversal operations vastly outnumber mutations, and is useful when you cannot or don't want to synchronize traversals, yet need to preclude interference among concurrent threads. The "snapshot" style iterator method uses a reference to the state of the array at the point that the iterator was created. This array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to throw ConcurrentModificationException. The iterator will not reflect additions, removals, or changes to the list since the iterator was created. Element-changing operations on iterators themselves (remove, set, and add) are not supported. These methods throw UnsupportedOperationException. 
    
    All elements are permitted, including null. 
    
    Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a CopyOnWriteArrayList happen-before actions subsequent to the access or removal of that element from the CopyOnWriteArrayList in another thread. 
    
    This class is a member of the Java Collections Framework.
    
    

    Fail-Safe vs Fail-Fast Iterator in Java Collections
    Fail-Safe Iterator (java.util.concurrent – ConcurrentSkipListSet, CopyOnWriteArrayList, ConcurrentMap)

    Fail-safe Iterator is “Weakly Consistent” and does not throw any exception if collection is modified structurally during the iteration. Such Iterator may work on clone of collection instead of original collection – such as in CopyOnWriteArrayList. While ConcurrentHashMap’s iterator returns the state of the hashtable at some point at or since the creation of iterator. Most collections under java.util.concurrent offer fail-safe Iterators to its users and that’s by Design. Fail safe collections should be preferred while writing multi-threaded applications to avoid conurrency related issues. Fail Safe Iterator is guaranteed to list elements as they existed upon construction of Iterator, and may reflect any modifications subsequent to construction (without guarantee).

    Fail-Fast Iterator (java.util package – HashMap, HashSet, TreeSet, etc)

    Iterator fails as soon as it realizes that the structure of the underlying data structure has been modified since the iteration has begun. Structural changes means adding, removing any element from the collection, merely updating some value in the data structure does not count for the structural modifications. It is implemented by keeping a modification count and if iterating thread realizes the changes in modification count, it throws ConcurrentModificationException. Most collections in package java.util are fail-fast by Design.

    Observer and Observable

     java.util.Observable
    
    This class represents an observable object, or "data" in the model-view paradigm. It can be subclassed to represent an object that the application wants to have observed. 
    
    An observable object can have one or more observers. An observer may be any object that implements interface Observer. After an observable instance changes, an application calling the Observable's notifyObservers method causes all of its observers to be notified of the change by a call to their update method. 
    
    The order in which notifications will be delivered is unspecified. The default implementation provided in the Observable class will notify Observers in the order in which they registered interest, but subclasses may change this order, use no guaranteed order, deliver notifications on separate threads, or may guarantee that their subclass follows this order, as they choose. 
    
    Note that this notification mechanism has nothing to do with threads and is completely separate from the wait and notify mechanism of class Object. 
    
    When an observable object is newly created, its set of observers is empty. Two observers are considered the same if and only if the equals method returns true for them.
    
    
    java.util.Observer
    
    A class can implement the Observer interface when it wants to be informed of changes in observable objects
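
    A minimal sketch (with the caveat that java.util.Observable/Observer were deprecated in Java 9; the PriceFeed name is made up for illustration):

    import java.util.Observable;
    import java.util.Observer;

    public class ObserverDemo {
        // the "data" being observed
        static class PriceFeed extends Observable {
            void publish(double price) {
                setChanged();             // mark the observable as changed...
                notifyObservers(price);   // ...then push the update to every registered observer
            }
        }

        public static void main(String[] args) {
            PriceFeed feed = new PriceFeed();
            Observer logger = (Observable source, Object price) ->
                    System.out.println("price update: " + price);
            feed.addObserver(logger);
            feed.publish(42.0);   // prints: price update: 42.0
        }
    }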
    
    

    ajax the file input

    I am trying to submit multiple forms on the same page at the same time, with one of them containing a file input.

    The key is to wrap the form using a new FormData object:

      $.ajax({
        url: 'saveAttachment.do',
        type: 'POST',
        data: new FormData($('#saveAttachment')[0]),
        async: false,
        cache: false,
        contentType: false,
        processData: false,
        success: function (returndata) {
          alert(returndata);
        }
      });
    

    subresource and prefetch

    Both seem to try the cache first, then make an HTTP request if the resource is not yet cached.

    At the same time, subresource can result in a duplicate HTTP retrieval if the same resource (e.g. an image) is requested again from somewhere else (e.g. CSS), because subresource always forces a download.

    Prefetch, by contrast, is low priority, since it's meant for the next page, so it waits for the current page to finish loading. A prefetch can also be cancelled by a subresource hint or any other request for the same resource.

    for example, with below source code:

    <!-- below would be reinitiated/requested by css later -->
    <link rel="subresource" href="images/header_brs.jpg" />
    <link rel="subresource" href="images/float_bar_lg.gif" />
    <link rel="prefetch" href="images/grad.jpg" />
    <link rel="prefetch" href="images/brs_C.jpg" />
    <!-- JS for next page -->
    <link rel="prefetch" href="js/summaryPage.js" />
    <link rel="subresource" href="js/summaryPage.js" />

    it would load as below:

    [screenshots of the network timeline: prefetch of brs_C.jpg and grad.jpg, prefetch + subresource of the next page's summaryPage.js, subresource of float_bar_lg.gif and header_brs.jpg]

    JVM 7

    http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/memleaks.html

    3.1.1 Detail Message: Java heap space

    The detail message Java heap space indicates that an object could not be allocated in the Java heap. This error does not necessarily imply a memory leak. The problem can be as simple as a configuration issue, where the specified heap size (or the default size, if not specified) is insufficient for the application.

    In other cases, and in particular for a long-lived application, the message might be an indication that the application is unintentionally holding references to objects, and this prevents the objects from being garbage collected. This is the Java language equivalent of a memory leak. Note that APIs that are called by an application could also be unintentionally holding object references.
    

    3.1.2 Detail Message: PermGen space

    The detail message PermGen space indicates that the permanent generation is full. The permanent generation is the area of the heap where class and method objects are stored. If an application loads a very large number of classes, then the size of the permanent generation might need to be increased using the -XX:MaxPermSize option.

    Interned java.lang.String objects are no longer stored in the permanent generation. The java.lang.String class maintains a pool of strings. When the intern method is invoked, the method checks the pool to see if an equal string is already in the pool. If there is, then the intern method returns it; otherwise it adds the string to the pool. In more precise terms, the java.lang.String.intern method is used to obtain the canonical representation of the string; the result is a reference to the same class instance that would be returned if that string appeared as a literal.

    When this kind of error occurs, the text ClassLoader.defineClass might appear near the top of the stack trace that is printed.