hot reload apollo gateway

it’s not unusual that there are cases where new graphql servers need to be added, or existing graphql servers updated.

Apollo Server, at the moment, has a gateway component which maintains a static service list, routes the right traffic to the right server, and aggregates the results.

However, this community version of Apollo Server and Gateway is not able to cater for the case above, where we would like to hot update/reload the gateway with any changes without shutting down the servers.

After some trial and error, this is the workaround to sort it out:

instead of hardcoding the service list (the Apollo graphql servers), maintain it separately and dynamically:

#### instead of this
const gateway = new ApolloGateway({
  serviceList: [
    { name: "astronauts", url: "http://localhost:4002" },
    { name: "missions", url: "http://localhost:4003" }
  ]
});

#### maintain this list instead
const dynamicServiceList = [
    { name: "astronauts", url: "http://localhost:4002" },
    { name: "missions", url: "http://localhost:4003" }
  ];

then, from the Apollo gateway, instead of spoon-feeding the list, switch it to dynamically load the service definitions:

const gateway = new ApolloGateway({
  serviceList: [],
...
  experimental_updateServiceDefinitions: async (serviceList) => {
    return {
      isNewSchema: true,
      serviceDefinitions:  await Promise.all(dynamicServiceList.map(async (service) => {
//load the sdl
        const data =  await request(service.url, sdlGQL);
...
        return {
...
//then feed the data here
          name: service.name,
          url: service.url,
          typeDefs: parse(data._service.sdl)
        };
      }))
    };
  }
});

At the same time, create a new endpoint, if needed, to take in updated or new servers, so as to update the dynamic list:

app.get('/hot_reload_schema/', async (req, res) => {
//get the new server info from req
....
      const status = await validateSDL(name, url);
      if (status){
//update the dynamic list
        dynamicServiceList.push({name, url});
        res.send('The new schema has been successfully registered');
      }
      else {
        res.send('Please check the log for the error details while registering this schema');
      }
    }
);
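The validateSDL helper used above isn’t shown here; a minimal sketch of what it could look like, assuming it simply fetches and parses the federated SDL from the candidate server (sdlGQL mirrors the query used earlier) and treats any failure as invalid:

const { request, gql } = require("graphql-request");
const { parse } = require("graphql");

//the standard federation SDL query
const sdlGQL = gql`
  {
    _service {
      sdl
    }
  }
`;

async function validateSDL(name, url) {
  try {
    //fetch the SDL from the candidate server and make sure it parses
    const data = await request(url, sdlGQL);
    parse(data._service.sdl);
    return true;
  } catch (err) {
    console.error(`failed to validate the schema for ${name} at ${url}`, err);
    return false;
  }
}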

lastly, make sure the gateway is polling the service list at some intervals:

new ApolloGateway({
  pollingTimer: 100_000,
  experimental_pollInterval: 100_000,
....
});

alternatively, this could also be updated by an explicit load:

new ApolloGateway({
....
}).load();

I have contributed the solution back to the community here.

commonJS vs ES Module

for nodeJS, the server side by default uses commonJS for its module system.

//export data data.js
exports.staffs = [{
   name: "Bush",
   id: 1,
}, 
{
   name: "Forest",
   id: 2
}
];

//default export from module, default.js
module.exports = {name: "Gump"};

//then corresponding import
const { staffs, .... } = require('./data')
const anything = require('./default')


while the client side uses ES modules:

//export
export const log = winston.createLogger({....});

//import (a named import, matching the named export above)
import { log } from './logging/logger';
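and the ES module counterpart of the CommonJS default export would be something like:

//default export, logger.js
import winston from 'winston';
export default winston.createLogger({....});

//then the corresponding default import
import log from './logging/logger';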

infinite redirects with lua-resty-openidc

lua-resty-openidc is a certified OIDC and OAuth library built on top of OpenResty, while OpenResty is a reverse proxy built on nginx with Lua and LuaJIT embedded, which greatly upgrades nginx’s capabilities.

lua-resty-openidc is able to authenticate and authorize the client with compliant OP (keycloak in my case). However, I was facing issues with infinite redirects:

    location /test {

      access_by_lua_block {

        local opts = {
          discovery = "http://keycloak/...../.well-known/openid-configuration",
          redirect_uri_path = "/test",
          accept_none_alg = true,
          client_id = "xxxx",
          client_secret = "xxxxx",
          use_nonce = true,
          revoke_tokens_on_logout = true,
        }

        local res, err, url, session = require("resty.openidc").authenticate(opts)

        if err or not res then
        ngx.status = 403
        ngx.say(err and err or "no access_token provided")
        ngx.exit(ngx.HTTP_FORBIDDEN)
        end
      }
      default_type text/html;
      content_by_lua 'ngx.say("<p>hello, world here from test</p>")';
    }

For the above block, I was expecting the library to direct the client to Keycloak for authentication the first time, then subsequently redirect back to the redirect_uri /test, which would then see that the client is already authenticated and proceed to the content_by_lua block.

However, instead, it ran into an infinite redirect between Keycloak and the redirect_uri: https://github.com/zmartzone/lua-resty-openidc/issues/32#issuecomment-656035986

The final solution was to put the control block (access_by_lua) under location /, which then worked.

===================================================

As a follow-up to the original post: the redirect_uri itself could be causing the issue. Instead of pointing it to a final landing page, pointing it to an intermediate path, which then redirects back to the original (protected) location, sorts the problem as well.

https://github.com/zmartzone/lua-resty-openidc/issues/343
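A minimal sketch of that layout (paths and option values are placeholders, not the original config): the callback path sits under the protected prefix, so it is covered by the same access_by_lua_block but never serves content of its own; lua-resty-openidc handles it and then sends the browser back to the page that was originally requested.

    location /test {
      access_by_lua_block {
        local opts = {
          discovery = "http://keycloak/...../.well-known/openid-configuration",
          -- intermediate callback path: it is matched by the same /test prefix,
          -- but no content is ever served from it directly
          redirect_uri_path = "/test/redirect_uri",
          client_id = "xxxx",
          client_secret = "xxxxx",
        }

        local res, err = require("resty.openidc").authenticate(opts)

        if err or not res then
          ngx.status = 403
          ngx.exit(ngx.HTTP_FORBIDDEN)
        end
      }
      default_type text/html;
      content_by_lua 'ngx.say("<p>hello, world here from test</p>")';
    }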

issue with embedded code in google site

I was facing some issue when embedding the code into google site:

Turns out Google was possibly running some checks, maybe on the traffic while the code was added, and with some activity from an add-on or Chrome itself, Google Sites thought it was suspicious and blocked it.

The solution was to run Chrome in incognito mode; then it works fine.

nginx nested locations

It was unintuitive, and the nginx doc doesn’t put it clearly: http://nginx.org/en/docs/http/ngx_http_core_module.html#location.

for nested blocks, for example

    location /protected {
      access_by_lua_block {
        local opts = {
          redirect_uri = "/auth_redirect",
...
          use_nonce = true
        }
...
      }
      default_type text/html;
      content_by_lua 'ngx.say("<p>Protected</p>")';


      location /protected/api {
          ...
      }

      location /auth {
          ...
      }
...
}

the URL http://{hostname}/protected/auth won’t work,

while at the same time, the URL http://{hostname}/protected/api routes correctly,

as a nested location still follows the same prefix matching against the full request URI, without any URL concatenation (the outer prefix is not prepended to the inner one).
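In other words, the inner prefix has to spell out the full path itself; a sketch of the working layout (paths are illustrative):

    location /protected {
      ...
      # nested prefixes still match against the full request URI,
      # so the inner location has to spell out the complete path itself
      location /protected/api {
          ...
      }

      location /protected/auth {
          ...
      }
    }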

chrome “Managed by your organization”

Problem:

There is this weird message on Chrome saying “Managed by your organization”,

and chrome://management shows something similar.

Turns out, this is due to some deprecated extensions.

Solutions:

To sort out the issue on Windows, go to regedit, then search for any suspicious Chrome registry keys:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome
HKEY_CURRENT_USER\SOFTWARE\Policies\Google\Chrome

For my computer, there was this deprecated EnablePluginPolicy, together with registry entries from AliSSO and AliWangwang.
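The same keys can also be inspected and cleaned from an administrator command prompt; a sketch (the value name is a placeholder, double-check before deleting anything):

:: list whatever Chrome policies are currently set
reg query "HKLM\SOFTWARE\Policies\Google\Chrome" /s
reg query "HKCU\SOFTWARE\Policies\Google\Chrome" /s

:: remove a stale value (placeholder name)
reg delete "HKCU\SOFTWARE\Policies\Google\Chrome" /v <PolicyName> /f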

After clearing these registry entries and restarting Chrome, the weird warnings are all cleared,

and chrome://policy/ is clean.

weird issue with pv, pvc, efs, csi, pod stuck at EKS

I have been facing various issues with one EKS cluster:

  • the pods are stuck at FailedMount, even though the PV and PVC are in Bound state, the PVC also shows bound to the specific pods, and the EFS on AWS is also in a good state
## relevant error message
list of unattached/unmounted volumes

Warning  FailedMount            1m (x3 over 6m)  kubelet, .....  Unable to mount volumes for pod "mypod1_test(c038c571-00ca-11e8-b696-0ee5f3530ee0)": timeout expired waiting for volumes to attach/mount for pod "test"/"mypod1". list of unattached/unmounted volumes=[aws]


Normal   Scheduled    5m45s                default-scheduler                                       Successfully assigned /test3 to ....
  Warning  FailedMount  83s (x2 over 3m42s)  kubelet, ...  Unable to mount volumes for pod "test3_....(fe03d56e-aa26-11ea-9d9c-067c9e734f0a)": timeout expired waiting for volumes to attach or mount for pod ""/"test3". list of unmounted volumes=[mypd]. list of unattached volumes=[mypd default-token-mznhh]
 
  • related, it shows the EFS CSI driver is not available:

Warning  FailedMount  9m1s (x4 over 15m)     kubelet, ....Unable to mount volumes for pod "test3_...(f0d65986-aa27-11ea-a7b2-022dd8ed078a)": timeout expired waiting for volumes to attach or mount for pod ""/"test3". list of unmounted volumes=[mypd]. list of unattached volumes=[mypd default-token-mznhh]
  Warning  FailedMount  2m35s (x6 over 2m51s)  kubelet, ... MountVolume.MountDevice failed for volume "scratch-pv" : driver name efs.csi.aws.com not found in the list of registered CSI drivers
  Warning  FailedMount  2m19s                  kubelet, ....  MountVolume.SetUp failed for volume ".." : rpc error: code = Internal desc = Could not mount "fs-43b99802:/" at "/var/lib/kubelet/pods/f0d65986-aa27-11ea-a7b2-022dd8ed078a/volumes/kubernetes.io~csi/scratch-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs fs-43b99802:/ /var/lib/kubelet/pods/f0d65986-aa27-11ea-a7b2-022dd8ed078a/volumes/kubernetes.io~csi/scratch-pv/mount
Output: Failed to resolve "fs-43b99802....amazonaws.com" - check that your file system ID is correct.
See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail.
  • pods stuck in Terminating state: even though the svc, deployment, and rs have been deleted, the pod is still stuck at termination

The final solution that sorted out all of the above was to reboot the EKS worker node.
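A sketch of that reboot, assuming SSH or SSM access to the node; the node name is a placeholder:

## drain first so workloads reschedule cleanly (placeholder node name)
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-local-data

## reboot the underlying EC2 instance (via SSH, SSM, or the EC2 console), then allow scheduling again
kubectl uncordon ip-10-0-1-23.ec2.internal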

helm upgrade issue on AWS EKS

I have been continuously receiving this error message during `helm upgrade`, even with several retries:

UPGRADE FAILED
Error: kind StorageClass with the name "..." already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart
Error: UPGRADE FAILED: kind StorageClass with the name ".." already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart

I have tried two solutions (the 1st didn’t work out):

  1. as per the helm upgrade error message, I deleted the StorageClass, then ran the helm upgrade again; even though some on the internet claim this works, it actually fails again with the same error message
  2. I did a helm rollback to a previous working revision, then ran the helm upgrade again, which worked:
helm history <release>
helm rollback <release> <a-previous-working-revision>
helm upgrade --install <release> .....

At the same time, for Helm 2, all release information is stored in ConfigMaps:

## use this command to get the release info
kubectl get  configmaps -n kube-system <release> -o jsonpath='{.data.release}' | base64 -d | gzip -cd

as for helm 3, it’s stored in secrets.
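For Helm 3, a rough equivalent would be the following (names are placeholders; the release payload inside the secret is base64-encoded and gzipped one more time, hence the double decode):

## helm 3 keeps the release info in a secret named sh.helm.release.v1.<release>.v<revision>
kubectl get secret -n <namespace> sh.helm.release.v1.<release>.v1 -o jsonpath='{.data.release}' | base64 -d | base64 -d | gzip -cd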

Upgrade your browser experience

Microsoft Edge, Opera, and Google Chrome have become the de facto browsers for the internet. This suite here will further enhance your browsing experience:

Tab & URL manager

Problem statement:

We tend to have so many tabs open that it becomes difficult to find a specific tab among them.

Instead, we normally tend to just open a new tab for a page which is already open and hidden somewhere, which additionally consumes more of our computer’s memory.

We have visited certain pages before, but really cannot recall the complete address of the page.

We are trying to search for a term, however, we would like to see only the top suggestions.

We might have different search engines we would like to leverage, for example Bing for desktop images, or Baidu for Chinese content.

We have limited screen space and would like to maximize it, instead of leaving the address bar and tabs to consume the precious screen space.

Solution:

Install the Tab & URL manager here, 
https://chrome.google.com/webstore/detail/tab-url-manager/egiemoacchfofdhhlfhkdcacgaopncmi

Use the shortcut to open the Tab & URL manager
Windows: Alt+L

Mac: ⌥+L

In addition, you can configure further keyword mapping in the Extension Options.
* type a URL followed by enter to visit the specified address
* type a term followed by enter to search the term (the search engine is configurable)
* type a term to navigate to any popular search suggestions
* type a keyword to navigate to any mapped URL (the map is configurable)
* type a term to navigate to any open tabs
* type a term to get the likely result you would like to visit

ShortCut for URL mapper

Problem Statement:

It’s always difficult to remember the long URLs. Yet, we could have so many pages we normally visit.

Solution:

Install the ShortCut for URL mapper here, 

https://chrome.google.com/webstore/detail/short-cut-for-url-mapper/lafchflokhmpcoaondfeffplkdnoaelh

Click the extension, to configure the keyword to URL mappings.

After that, just type sc in the address bar, followed by the keyword, to bring you to the targeted page.

My Tabs

Problem Statement:

We all have our own list of often-visited pages. Normally we would either bookmark them, then click one by one to open each, or just open a new tab for each and type the URLs manually.

This is very tedious and wastes our precious time.

Solution:

Install MyTabs here
https://chrome.google.com/webstore/detail/mytabs/mjapfgokeoeigopkkjnlkhgpcgoaoenf
Click the extension, to configure the often visited pages.

Then, each time, auto-load them all with one shortcut: Alt+Shift+T, or if using a Mac, Option+Shift+T.

Datadog APM tracing and logging

Recently, I have set up my microservices application on Datadog APM with tracing and logging,

so that we can follow, for example, any HTTP request or user_id from the HTTP request, through the controller, service, authentication, database, etc., with the same trace ID and detailed logs.

Here are the steps I have followed:

  1. Set up the Datadog account and access, so that we get the Datadog API key, which is used to upload the data to the Datadog server
  2. install the Datadog agent onto the servers. We are using Kubernetes on AWS (EKS) here, and have installed it as a DaemonSet (so that we have one per node) with Terraform.

set up the necessary environment variable in the daemonset

and expose the necessary ports
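A rough sketch of the relevant part of the agent DaemonSet spec, assuming the standard Datadog agent environment variables and the default trace port (the secret name is a placeholder):

# sketch of the agent container inside the daemonset
containers:
  - name: datadog-agent
    image: datadog/agent:7
    env:
      - name: DD_API_KEY
        valueFrom:
          secretKeyRef:
            name: datadog-secret   # placeholder secret holding the API key
            key: api-key
      - name: DD_APM_ENABLED
        value: "true"
      - name: DD_APM_NON_LOCAL_TRAFFIC
        value: "true"              # accept traces from the other pods on the node
      - name: DD_LOGS_ENABLED
        value: "true"
    ports:
      - containerPort: 8126        # APM / trace intake
        hostPort: 8126
        name: traceport
        protocol: TCP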

3. from the Python services, leverage libraries, for example https://pypi.org/project/JSON-log-formatter/0.1.0/, to log the messages in JSON format:

import logging
from pythonjsonlogger import jsonlogger  # assumed import for the JsonFormatter used below

format = "%(asctime)s %(levelname)s [%(name)s]-- '[dd.trace_id=%(dd.trace_id)s dd.span_id=%(dd.span_id)s] ' -- %(message)s"
json = jsonlogger.JsonFormatter(format)

handler = logging.StreamHandler()
handler.setFormatter(json)

so that these logs would be in JSON format, and datadog would be able to parse the logs into tags & attributes.

4. then configure the datadog tracing


import os

from ddtrace import config, patch_all, tracer


def setup_datadog():
    tracer.configure(
        hostname=os.environ["DD_AGENT_HOST"],#set up in the deployment.yaml, the env attribute
    )
    tracer.set_tags({"env": "production"})
    config.flask["service_name"] = "checkout-service" #this could also set up in the deployment.yaml, the env attribute
    config.flask["analytics_enabled"] = True
    config.flask["extra_error_codes"] = [401, 403]
    patch_all()

5. finally, annotate the entry point function with

@tracer.wrap(service="checkout-service")
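For example, a minimal sketch (the function name and body are illustrative):

import logging

from ddtrace import tracer

log = logging.getLogger(__name__)


@tracer.wrap(service="checkout-service")
def handle_checkout(order_id):
    # everything executed inside this function is captured as one trace span
    log.info("processing checkout for %s", order_id)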

sample configuration in the deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
...
  annotations:
    ad.datadoghq.com/checkout.logs: '[{"source":"python", "service": "checkout-service"}]'
spec:
  replicas: 1
...
  template:
    metadata:
    spec:
      containers:
      -  name: checkout
...
        env:
          - name: DD_AGENT_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
          - name: DD_SERVICE
            value: checkout-service
          - name: DD_TRACE_ANALYTICS_ENABLED
            value: "true"
...
      restartPolicy: Always
status: {}

Persistence Volume Stuck at terminating

Sometimes, a PV and PVC can get stuck at Terminating. This is because there is a finalizer protecting the PV and PVC from termination while they might still be in use.

For example, when I was trying to delete the PV, both the PV and PVC were stuck for quite a while.

the resolution is to edit and remove the finalizers:
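For example, either interactively with kubectl edit, or directly with a patch (resource names are placeholders):

kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'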

then, without the finalizers, the PV and PVC are allowed to terminate.

kubernetes endpoints

Endpoints are normally used behind the scenes, without the need for any manual intervention.

However, for local testing, it can be helpful to customize the Endpoints.

---	
  kind: "Service"
  apiVersion: "v1"
  metadata:
    name: "svc-to-external-web"
  spec:
    ports:
      - name: "apache"
        protocol: "TCP"
        port: 80
        targetPort: 80 
---
  kind: "Endpoints"
  apiVersion: "v1"
  metadata:
    name: "svc-to-external-web" 
  subsets: 
    - addresses:
        - ip: "8.8.8.8" #The IP Address of the external web server
      ports:
        - port: 80 
          name: "apache"

For example, with the above, if we create a service without any selector (for any pods/deployments) and manually create an Endpoints object with the same name as the service, then we can configure where the endpoint points to, which could be an external IP address, or maybe a Docker service,

like

apiVersion: v1
kind: Endpoints
metadata:
  name: svc-to-external-web
subsets:
- addresses:
  - ip: 192.168.64.1 # minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'"
  ports:
  - port: 80

ref: https://theithollow.com/2019/02/04/kubernetes-endpoints/

pod stuck at pending with custom scheduler

I have created a custom scheduler, which is in Running state.

However, when I create a new pod and assign it to the custom scheduler through .spec.schedulerName, the pod gets stuck in Pending state.


Here is the sample configuration:

scheduler.yml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --port=10351
    - --scheduler-name=my-scheduler
    - --secure-port=10359
    image: k8s.gcr.io/kube-scheduler:v1.16.0
    imagePullPolicy: IfNotPresent

pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  -  image: nginx
     name: nginx

Turns out the issue was with --leader-elect: it seems the scheduler is not able to move forward with --leader-elect set to true.
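So, presumably, the fix is to run this custom scheduler with leader election disabled; a sketch of the adjusted command section:

    - kube-scheduler
    ...
    - --leader-elect=false
    ...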

error running etcd in minikube

When I tried to run etcdctl in minikube, there was an exception.

commands:

ETCDCTL_API=3 etcdctl get "" --prefix=true
ETCDCTL_API=3 etcdctl get "" --from-key

Exception

{"level":"warn","ts":"2020-05-01T09:32:07.933Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-ea0c78af-0a9d-4092-8722-75fed707e112/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection closed"}
Error: context deadline exceeded

the solution is to provide the needed CA and certificate files:

ETCDCTL_API=3 etcdctl get "" --prefix=true --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key

the cert can be found by running

kubectl get pod etcd-minikube -n kube-system -o yaml

babel 7 with typescript

according to this post, https://devblogs.microsoft.com/typescript/typescript-and-babel-7/, it seems like Babel 7 should work with TypeScript.

However, I have encountered quite a lot of surprises during the integration.

here are the changes I have done:

install the typescript preset

yarn add --dev @babel/preset-typescript @babel/preset-env @babel/plugin-proposal-class-properties @babel/plugin-proposal-object-rest-spread

then configure babel.config.json

(most importantly, I have a “macros” plugin; that’s the reason I would like to switch from tsc to babel, as I would like to leverage the macros)

{
  "presets": [
    "@babel/env",
    "@babel/preset-typescript"
  ],
  "plugins": [
    "@babel/proposal-class-properties",
    "@babel/proposal-object-rest-spread",
    "macros"
  ],
  "ignore": [
    "node_modules"
  ]
}

then in package.json, add these scripts

    "build": "babel . -d lib --extensions '.ts,.tsx'",
    "apprun": "babel-node --extensions '.ts,.tsx' index.ts",
    "rundev": "build && nodemon lib/app/app.js",
    "runProd": "build && node lib/app/app.js"

babel-node is working fine.

however, the babel build output quite often seems incomplete.

for one, it complains that the transpiled graphql_macro_1 is not defined, as this line is missing from the generated javascript file:

const graphql_macro_1 = require("graphql.macro");

then, at this moment, it’s complaining about another line, as the variable regeneratorRuntime is called but looks like it is nowhere defined in the generated javascript file.

this is the complete package.json, with the relevant packages and scripts

{
  "name": "frontend",
  "version": "1.0.0",
  "license": "MIT",
  "dependencies": {
....
    "graphql": "^15.0.0",
    "graphql.macro": "^1.4.2",
...
  },
  "scripts": {
...
    "build": "babel . -d lib --extensions '.ts,.tsx'",
    "apprun": "babel-node --extensions '.ts,.tsx' index.ts",
    "rundev": "build && nodemon lib/app/app.js",
    "runProd": "build && node lib/app/app.js"
  },
  "devDependencies": {
    "@babel/cli": "^7.8.4",
    "@babel/core": "^7.9.0",
    "@babel/node": "^7.8.7",
    "@babel/plugin-proposal-class-properties": "^7.8.3",
    "@babel/plugin-proposal-export-default-from": "^7.8.3",
    "@babel/plugin-proposal-export-namespace-from": "^7.8.3",
    "@babel/plugin-proposal-object-rest-spread": "^7.9.5",
    "@babel/preset-env": "^7.9.5",
    "@babel/preset-flow": "^7.9.0",
    "@babel/preset-typescript": "^7.9.0",
    "babel-watch": "^7.0.0",
...
  }
}

Versus https://www.typescriptlang.org/docs/handbook/compiler-options.html, tsc is doing a much better job. The only caveat is that it cannot expand the necessary babel macros.

ref:

https://babeljs.io/docs/en/babel-preset-typescript#docsNav

https://iamturns.com/typescript-babel/

the original post from microsoft below

Today we’re excited to announce something special for Babel users. Over a year ago, we set out to find what the biggest difficulties users were running into with TypeScript, and we found that a common theme among Babel users was that trying to get TypeScript set up was just too hard. The reasons often varied, but for a lot of developers, rewiring a build that’s already working can be a daunting task.

Babel is a fantastic tool with a vibrant ecosystem that serves millions of developers by transforming the latest JavaScript features to older runtimes and browsers; but it doesn’t do type-checking, which our team believes can bring that experience to another level. While TypeScript itself can do both, we wanted to make it easier to get that experience without forcing users to switch from Babel.

That’s why over the past year we’ve collaborated with the Babel team, and today we’re happy to jointly announce that Babel 7 now ships with TypeScript support!

How do I use it?

If you’re already using Babel and you’ve never tried TypeScript, now’s your chance because it’s easier than ever. At a minimum, you’ll need to install the TypeScript plugin.

npm install --save-dev @babel/preset-typescript

Though you’ll also probably want to get the other ECMAScript features that TypeScript supports:

npm install --save-dev @babel/preset-typescript @babel/preset-env @babel/plugin-proposal-class-properties @babel/plugin-proposal-object-rest-spread

Make sure your .babelrc has the right presets and plugins:

{
    "presets": [
        "@babel/env",
        "@babel/preset-typescript"
    ],
    "plugins": [
        "@babel/proposal-class-properties",
        "@babel/proposal-object-rest-spread"
    ]
}

For a simple build with @babel/cli, all you need to do is run

babel ./src --out-dir lib --extensions ".ts,.tsx"

Your files should now be built and generated in the lib directory.

To add type-checking with TypeScript, create a tsconfig.json file

{
  "compilerOptions": {
    // Target latest version of ECMAScript.
    "target": "esnext",
    // Search under node_modules for non-relative imports.
    "moduleResolution": "node",
    // Process & infer types from .js files.
    "allowJs": true,
    // Don't emit; allow Babel to transform files.
    "noEmit": true,
    // Enable strictest settings like strictNullChecks & noImplicitAny.
    "strict": true,
    // Disallow features that require cross-file information for emit.
    "isolatedModules": true,
    // Import non-ES modules as default imports.
    "esModuleInterop": true
  },
  "include": [
    "src"
  ]
}

and just run

tsc

and that’s it! tsc will type-check your .ts and .tsx files.

Feel free to add the --watch flag to either tool to get immediate feedback when anything changes. You can see how to set up a more complex build on this sample repository which integrates with tools like Webpack. You can also just play around with the TypeScript preset on Babel’s online REPL.

What does this mean for me?

Using the TypeScript compiler is still the preferred way to build TypeScript. While Babel can take over compiling/transpiling – doing things like erasing your types and rewriting the newest ECMAScript features to work in older runtimes – it doesn’t have type-checking built in, and still requires using TypeScript to accomplish that. So even if Babel builds successfully, you might need to check in with TypeScript to catch type errors. For that reason, we feel tsc and the tools around the compiler pipeline will still give the most integrated and consistent experience for most projects.

So if you’re already using TypeScript, maybe this doesn’t change much for you. But if you’re already using Babel, or interested in the Babel ecosystem, and you want to get the benefits of TypeScript like catching typos, error checking, and the editing experiences you might’ve seen in the likes of Visual Studio and Visual Studio Code, this is for you!

Caveats

As we mentioned above, the first thing users should be aware of is that Babel won’t perform type-checking on TypeScript code; it will only be transforming your code, and it will compile regardless of whether type errors are present. While that means Babel is free from doing things like reading .d.ts files and ensuring your types are compatible, presumably you’ll want some tool to do that, and so you’ll still need TypeScript. This can be done as a separate tsc --watch task in the background, or it can be part of a lint/CI step in your build. Luckily, with the right editor support, you’ll be able to spot most errors before you even save.

Second, there are certain constructs that don’t currently compile in Babel 7. Specifically,

  • namespaces
  • bracket style type-assertion/cast syntax regardless of when JSX is enabled (i.e. writing <Foo>x won’t work even in .ts files if JSX support is turned on, but you can instead write x as Foo).
  • enums that span multiple declarations (i.e. enum merging)
  • legacy-style import/export syntax (i.e. import foo = require(...) and export = foo)

These omissions are largely based on technical constraints in Babel’s single-file emit architecture. We believe that most users will find this experience to be totally acceptable. To make sure that TypeScript can call out some of these omissions, you should ensure that TypeScript uses the --isolatedModules flag.

What next?

You can read up on the details from the Babel side on their release blog post. We’re happy that we’ve had the chance to collaborate with folks on the Babel team like Henry Zhu, Andrew Levine, Logan Smyth, Daniel Tschinder, James Henry, Diogo Franco, Ivan Babak, Nicolò Ribaudo, Brian Ng, and Vladimir Kurchatkin. We even had the opportunity to speed up Babylon, Babel’s parser, and helped align with James Henry’s work on typescript-eslint-parser which now powers Prettier’s TypeScript support. If we missed you, we’re sorry but we’re grateful and we appreciate all the help people collectively put in!

Our team will be contributing to future updates in the TypeScript plugin, and we look forward to bringing a great experience to all TypeScript users. Going forward, we’d love to hear your feedback about this new TypeScript support in Babel, and how we can make it even easier to use. Give us a shout on Twitter at @typescriptlang or in the comments below.

Happy hacking!

docker compose timeout

I was not able to run my `docker-compose` commands earlier, for example `dcd up` or `dcd ps` (dcd is an alias for docker-compose).

After turning on verbose output, `dcd --verbose ps`, it turns out the issue is with one of the containers.

This is a front-end container, which also explains why I am not able to hot reload the pages, even though I already have the volume mount.

and in case docker kill is also not working, then do a restart of Docker, followed by docker stop and docker rm.
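A quick sketch of that cleanup (the container name is a placeholder):

docker kill <container>        ## if this also hangs, restart the Docker daemon first
docker stop <container>
docker rm <container>
docker-compose up -d           ## then bring the stack back up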

Update yarn indirect dependencies

It’s common that some indirect dependencies get flagged due to vulnerability issues. For example,

I have multiple indirect dependencies on `kind-of`,

which are being used by various other dependencies (direct and indirect),

and it was recently found that there is an issue with `kind-of`:

https://github.com/advisories/GHSA-6c8f-qphg-qjgp

and to update these indirect dependencies, the solution is to add a resolutions block in package.json:
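Something along these lines (the pinned version should be whatever the advisory lists as patched; ^6.0.3 here is an assumption based on the kind-of advisory):

{
  "resolutions": {
    "**/kind-of": "^6.0.3"
  }
}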

After that, just run `yarn`;

it will update the dependencies to the correct version.

refer to https://classic.yarnpkg.com/en/docs/selective-version-resolutions/

https://itnext.io/fixing-security-vulnerabilities-in-npm-dependencies-in-less-than-3-mins-a53af735261d

Build a pipe from container to cluster

There are valid needs to talk to the Kubernetes cluster from segregated docker containers. It’s possible to do so:

Build the pipe from the container to the host machine

There are several ways to connect to the host machine. The container is running alongside the host, behaving as if on the same subnet, so you can access the host through its public IP.

Otherwise, more elegantly, you can leverage host.docker.internal to talk to the host.
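One caveat: host.docker.internal resolves out of the box on Docker Desktop (Mac/Windows); on Linux, with recent Docker versions, you would typically map it yourself, e.g.:

docker run --add-host=host.docker.internal:host-gateway <image>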

Proxy the resources for the Kubernetes cluster

you can start a Kubernetes proxy to talk to the cluster.

kubectl proxy --port=8080 --disable-filter &

Access to the resources

then to talk to the resources in the cluster from the container, you can do

host.docker.internal:8080/api/v1/namespaces/default/services/pod:{port}/proxy/

This is related to, https://lwpro2.dev/2020/04/03/expose-existing-deployment-with-minikube/, as there is an issue with Kubernetes apiserver proxy, https://github.com/kubernetes/kubernetes/issues/89360, which would strip out the parameter for websocket.

The work around is to use minikube tunnel with LoadBalancer, as in above post.

openssl issue on OSX

In addition to https://lwpro2.dev/2020/04/20/git-command-issue-with-openssl/: even though `brew uninstall` and `brew install` of the specific old version of openssl worked for me with git,

it still breaks with poetry.

[SSLError]
HTTPSConnectionPool(host='privatepypi', port=443): Max retries exceeded with url: /pypi/table-understanding/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not availa
ble."))

the solution that works out for both is:

brew switch openssl 1.0.2s

private pypi server issue

In addition to https://lwpro2.dev/2020/04/20/build-a-private-pypi-with-github/, it took me quite a while to figure out why I am not able to install certain packages using poetry.

Turns out that’s because PEP 503 has a very particular name normalization:

https://www.python.org/dev/peps/pep-0503/#normalized-names

Hence, I need to publish my package under the name safe-logging, even though the binary is safe_logging and, within the directory, it is named safe_logging.
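The normalization itself is tiny; PEP 503 defines it essentially as:

import re

def normalize(name):
    # runs of "-", "_" and "." collapse to a single "-", and the name is lowercased,
    # so safe_logging is published (and looked up) as safe-logging
    return re.sub(r"[-_.]+", "-", name).lower()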

leverage on eslint and prettier together

ESLint is now the de facto linter for the front end, even for TypeScript, especially with the deprecation of TSLint.

ESLint is able to do formatting checks, like line length, trailing semicolons, etc.; at the same time, it’s able to do syntax checking as well, for example for unused or undefined variables.

Meanwhile, Prettier is really doing a good job of formatting the front end. It’s a very opinionated framework, however with very easy to customize configuration. For example,

{
  "semi": true,
  "trailingComma": "all",
  "singleQuote": true,
  "printWidth": 70
}

with the combination of both, we can leverage the strengths of both, to do lint and format:

npx eslint -c .eslintrc.json **/*.{ts,tsx} --fix ##for format or fix

npx eslint -c .eslintrc.json **/*.{ts,tsx} ## for lint

these are the set ups we need to prepare:

install the dependencies:

npm install -g prettier eslint
## or
yarn add prettier eslint

install the plugins

npm install --save-dev eslint-config-prettier eslint-plugin-prettier
## or
yarn add --dev eslint-config-prettier eslint-plugin-prettier

then configure .eslintrc

{
  "root": true,
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint", "prettier"],
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/eslint-recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:prettier/recommended"
  ],
  "rules": {
    "@typescript-eslint/interface-name-prefix": 1
  }
}


in addition, to use airbnb formatting:

npx install-peerdeps --dev eslint-config-airbnb

then add it into .eslintrc.json

{
  "extends": ["airbnb", "prettier"],
  "plugins": ["prettier"],
  "rules": {
    "prettier/prettier": ["error"]
  }
}

Build a private pypi with github

Recently, I have tried to build a PyPI server on top of GitHub.

here are the steps:

  1. create a repo to host those packages
  2. at the root level, define the setup.py with find_packages(), like
import setuptools

with open("README.md", "r") as fh:
   long_description = fh.read()

setuptools.setup(
   name="pypi",
   version="0.0.1",
   author="lwpro2",
   author_email="lwpro2",
   description="A pypi for python packages",
   long_description=long_description,
   long_description_content_type="text/markdown",
   url="https://github.com/lwpro2",
   packages=setuptools.find_packages(),
   classifiers=[
      "Programming Language :: Python :: 3",
      "License :: OSI Approved :: MIT License",
      "Operating System :: OS Independent",
   ]
)

3. at each package, add a setup.py as well

import setuptools

with open("README.md", "r") as fh:
   long_description = fh.read()

setuptools.setup(
   name="package1",
   version="0.0.2",
   author="lwpro2",
   author_email="lwpro2",
   description="A fake library",
   long_description=long_description,
   long_description_content_type="text/markdown",
   url="https://github.com/lwpro2",
   packages=setuptools.find_packages(),
   classifiers=[
      "Programming Language :: Python :: 3",
      "License :: OSI Approved :: MIT License",
      "Operating System :: OS Independent",
   ]
)

the folder structure will be like

--pypi
------setup.py
------README.md
------package1
-------------setup.py
-------------README.md
-------------__init__.py
-------------functions.py
------package2
-------------setup.py
-------------README.md
-------------__init__.py
-------------functions.py

4. run this command to generate the binaries

python3 setup.py sdist bdist_wheel

the folder will become

--pypi
------setup.py
------README.md
------package1
-------------setup.py
-------------README.md
-------------__init__.py
-------------functions.py

-------------build/
-------------dist/
--------------------package1-0.0.1.tar.gz
------package2
-------------setup.py
-------------README.md
-------------__init__.py
-------------functions.py

-------------build/
-------------dist/
--------------------package2-0.0.1.tar.gz

5. then create a static PyPI index which complies with PEP 503, like
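Structurally, a PEP 503 “simple” index is just nested static HTML pages, roughly like this (the hrefs are placeholders pointing at wherever the built archives are hosted):

<!-- /simple/index.html : one anchor per project -->
<html><body>
  <a href="/simple/package1/">package1</a>
  <a href="/simple/package2/">package2</a>
</body></html>

<!-- /simple/package1/index.html : one anchor per downloadable file -->
<html><body>
  <a href="../../package1/dist/package1-0.0.1.tar.gz">package1-0.0.1.tar.gz</a>
</body></html>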



6. then add the new pypi server into Pipfile and pyproject.toml

Pipfile

[source]
name = "privatepypi"
url = "http://private/pypi"
verify_ssl = false

[dev-packages]

[packages]
package1 = {version = "*", index = "privatepypi"}

pyproject.toml

[[tool.poetry.source]]
name = "privatepypi"
url = "https://private/pypi"
secondary = true

[tool.poetry.dependencies]
package2= "^0.0.1"

7. then packages can be installed using the usual commands

pip install

poetry install

Possible Kubernetes exception

in addition to https://lwpro2.dev/2020/04/03/expose-existing-deployment-with-minikube/, with kubectl proxy --port=8080 --disable-filter, the container could talk to the kubernetes cluster.

however, if the proxy is not running or not running properly, it could result in two types of exceptions:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 665, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/lib/python3.8/http/client.py", line 1230, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1276, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1225, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1004, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.8/http/client.py", line 944, in send
    self.connect()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 184, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 168, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: : Failed to establish a new connection: [Errno 111] Connection refused

alternatively,

Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 13463, in list_namespaced_service
    (data) = self.list_namespaced_service_with_http_info(namespace, **kwargs) # noqa: E501
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 13551, in list_namespaced_service_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 340, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 172, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 362, in request
    return self.rest_client.GET(url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 237, in GET
    return self.request("GET", url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 231, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (502)
Reason: Bad Gateway
HTTP response headers: HTTPHeaderDict({'Date': 'Tue, 14 Apr 2020 06:03:12 GMT', 'Content-Length': '0'})

docker compose variable substitution

Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally when using the 2.1 file format, it is possible to provide inline default values using typical shell syntax:

  • ${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
  • ${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.

Similarly, the following syntax allows you to specify mandatory variables:

  • ${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
  • ${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.
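For example, a minimal compose sketch using both forms (the service and variable names are arbitrary):

version: "2.1"
services:
  web:
    image: "nginx:${NGINX_TAG:-latest}"            # falls back to "latest" if NGINX_TAG is unset or empty
    ports:
      - "${HOST_PORT:?HOST_PORT must be set}:80"   # aborts with this message if HOST_PORT is unset or empty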

Expose existing deployment with minikube

in addition to https://lwpro2.dev/2020/03/23/expose-existing-deployment/, if the cluster is from minikube, there are some more options to expose the deployment.

Similar to kubectl port-forward svc/local-files-12a341c023 8889:8889, which exposes the service to localhost.

Minikube can do a similar expose with:

`minikube service local-test-ecd44fa2fe --url`

for example, for existing service

local-test-ecd44fa2fe ClusterIP 10.96.222.209 8501/TCP,1337/TCP 15d

we can patch it,

kubectl patch svc local-test-ecd44fa2fe -p '{"spec": {"type": "NodePort"}}'

then run the minikube service,

minikube service local-test-ecd44fa2fe --url

which would then give us the URL for accessing the svc:

http://192.168.64.9:31012
http://192.168.64.9:31458

the svc is now updated with the host port:

local-test-ecd44fa2fe NodePort 10.96.222.209 8501:31012/TCP,1337:31458/TCP 15d

alternatively, we could also do tunnelling with minikube

for example, if we patch an existing svc:

kubectl patch svc local-files-12a341c023 -p '{"spec": {"type": "LoadBalancer"}}'

it would update the svc with the ports; at the same time, if we run the tunnel:

minikube tunnel

it will give us the external-IP, (otherwise would be pending)

local-new-0608a5336b LoadBalancer 10.96.117.204 10.96.117.204 8501:30556/TCP,1337:32335/TCP 10d

now we will be able to access the svc using the external-ip:port

at the same time, we can still do the

minikube service local-new-0608a5336b --url

which would give us the 192.168.x:port .

note: 192.168.x is the cluster IP:

Kubernetes master is running at https://192.168.xx.x:8443

which we can access from the host to get into the cluster,

while the IP 10.96.xx is the within-cluster IP, which, however, with the tunnel is exposed to the host.

python websocket client and with auth header

# import asyncio
import ssl
from socket import socket

import websocket
# import websockets

def on_message(ws, message):
    print ('message received ..')
    print (message)


def on_error(ws, error):
    print ('error happened .. ')
    print (error)


def on_close(ws):
    print ("### closed ###")


def on_open(ws):

    print ('Opening Websocket connection to the server ... ')

    ## This session_key I got needs to be passed over the websocket header instead of ws.send.
    ws.send("testing message here")

websocket.enableTrace(True)

token = "........"
auth = "Authorization: Bearer " + token
ws = websocket.WebSocketApp("wss://APISERVER:8443/api/v1/namespaces/default/services/the-service:8889/proxy/websocket?token=123",
                            on_open = on_open,
                            on_message = on_message,
                            on_error = on_error,
                            on_close = on_close,
                            header = [auth]
                            )

ws.on_open = on_open

##Note: this is for --insecure flag in curl, basically to tell the client not verify the ssl certificate
ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
# socket.setsockopt

get the APISERVER and TOKEN values using

APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
SECRET_NAME=$(kubectl get secrets | grep ^default | cut -f1 -d ' ')
TOKEN=$(kubectl describe secret $SECRET_NAME | grep -E '^token' | cut -f2 -d':' | tr -d " ")

curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure

Start a websocket server with tornado

import tornado.ioloop
import tornado.web
import tornado.escape
import tornado.websocket
import tornado.options
import time
import logging
import uuid
import sys,os
from tornado.options import define, options



class MainHandler(tornado.web.RequestHandler):
	def get(self):
		self.write("Hello, world")

def make_app():
	return tornado.web.Application([
		(r"/", MainHandler),
	])



define('port', default=8889, help="The tornado server port", type=int)


class WebSocketSever(tornado.websocket.WebSocketHandler):
	bao_cons = set()
	bao_waiters = {}
	global con_key
	global token

	# def initialize(self, my_object):
	# 	self.my_object = my_object

	def open(self):
		sole_id = str(uuid.uuid4()).upper()
		print(sole_id)
		self.con_key = sole_id
		self.token = self.get_argument("token")
		self.bao_waiters["{}".format(sole_id)] = self
		self.bao_cons.add(self)
		# self.write_message({"websocket_sole_id": sole_id})
		self.write_message({"token": self.token})
		logging.info("websocket opened!")
		print(self.bao_cons)

	def on_message(self, message):

		print(type(message))
		if message == "close":
			self.close()
			return
		try:
			parse_data = tornado.escape.json_decode(message)
			if parse_data["user"] and parse_data["content"]:
				user = parse_data["user"]
				content = parse_data["content"]
				if not user or not content:
					logging.info("Date is wrong!")
					return
				else:
					for key in self.bao_waiters.keys():
						if key == user:
							try:
								self.bao_waiters[key].write_message("{}".format(content))
							except Exception as e:
								logging.info(e)
							finally:
								logging.info("process finished!")
		except:
			for con in self.bao_cons:
				con.write_message(message)

	def check_origin(self, origin: str):
		return True

	def allow_draft76(self):
		return True

	def on_close(self):
		self.bao_cons.remove(self)
		# bao_waiters is keyed by the generated sole_id (con_key), not by the token
		self.bao_waiters.pop(self.con_key, None)

		logging.info("websocket closed!")
		print(self.bao_cons)


class Application(tornado.web.Application):
	def __init__(self, handlers, setting):
		super(Application, self).__init__(handlers, **setting)


def main():
	options.parse_command_line()
	handlers = [
		(r"/websocket", WebSocketSever),
		(r"/http", MainHandler),
	            ]
	setting = dict(xsrf_cookies=False)
	app = Application(handlers, setting)
	print(options.port)

	app.listen(options.port)
	tornado.ioloop.IOLoop.current().start()


if __name__ == "__main__":
	# app = make_app()
	# app.listen(8889)
	# tornado.ioloop.IOLoop.current().start()
	main()

then to access the websocket server, for example, using javascript

var ws = new WebSocket("ws://127.0.0.1:8080/api/v1/namespaces/default/services/serices:8889/proxy/websocket?token=123");
ws.onopen = function() {
   ws.send("Hello, world");
};
ws.onmessage = function (evt) {
   alert(evt.data);
};

Expose existing deployment

expose the deployment, pod, or replicaset using the expose command

kubectl expose replicasets.apps existing-rc --port=9000 --target-port=9000 --type=NodePort --name=testport

otherwise, if you already have a service running, you can upgrade it

`kubectl patch svc existing-service -p '{"spec": {"type": "NodePort"}}'`

Note, if you already have cluster IP, you can also use LoadBalancer.

after that, you should have the service with the necessary port information


if you are using minikube locally, you can get the URL as:

minikube service --url the-new-service


alternatively, you could also achieve this by port forwarding

kubectl port-forward svc/existing-service host-port:container-port

as such, the service could be reached at hostname:host-port, for example: 127.0.0.1:host-port

expose more ports for existing service

(base) ➜ ~ kubectl get service local-files-12a341c023 -o yaml



then patch according to the spec:

kubectl patch services local-files-12a341c023 --type='json' -p='[{"op": "add", "path": "/spec/ports/-", "value": {"name":"tornado","port":8889,"targetPort": 8889,"protocol":"TCP"}}]'



Kubernetes service vs deployment

Kubernetes Service vs Deployment

What’s the difference between a Service and a Deployment in Kubernetes?

A deployment is responsible for keeping a set of pods running.

A service is responsible for enabling network access to a set of pods.

We could use a deployment without a service to keep a set of identical pods running in the Kubernetes cluster. The deployment could be scaled up and down and pods could be replicated. Each pod could be accessed individually via direct network requests (rather than abstracting them behind a service), but keeping track of this for a lot of pods is difficult.

We could also use a service without a deployment. We’d need to create each pod individually (rather than “all-at-once” like a deployment). Then our service could route network requests to those pods via selecting them based on their labels.

Services and Deployments are different, but they work together nicely.

source: https://matthewpalmer.net/kubernetes-app-developer/articles/service-kubernetes-example-tutorial.html

Contract first vs code first: history is repeating itself

With the increased popularity of GraphQL comes the increased popularity of frameworks such as (for Python) Graphene and Ariadne.

They are two frameworks able to reach the same target: a working GraphQL backend service catering for GraphQL queries, mutations, and subscriptions.

However, they take opposite routes.

Graphene takes the code-first route. Developers write the Python to tell the service the information it can serve (resolve or mutate) and how to serve it. Something like:

import graphene
# Define your domain object, you can couple it with Django if they are from DB
class Company(DjangoObjectType):
    class Meta:
        model = dm.Company
....
    def resolve_staffs(self, info, **kwargs):
        return self.staffs.all()
## Define your resolvers
class CompanyQuery(graphene.ObjectType):
    companies= graphene.List(graphene.NonNull(Company), required=True)
    company = graphene.Field(Company)

    def resolve_company(self, info, **kwargs):
....
        return dm.User.objects.get(user_id=user_id)

    def resolve_companies(self, info, access_group_id=None):
        model = dm.Company.objects

....
        return model.filter(....)
## include into the Query
class Query(
    CompanyQuery,
....
):
    pass
## include different types
schema = graphene.Schema(query=Query, ....)

It starts with the code, defining the domain object and then how the service resolves that instance; the contract people can reach is then generated.

For Ariadne, it will start the other way around, so we will first define the schemas:

## define the domain object
type CompanyInfo {
    name: String!
    address: [String!]!
    staff: [Staff!]!
    owner: String!
....
}
## define the resolver
type Query {
    companies: [CompanyInfo!]!
....
}
## Then we load the schema
schema = load_schema_from_path("...graphql")
## Then link the schema with the resolver
make_executable_schema(
    schema, company_resolver()....
)
## define the resolver
company = ObjectType("CompanyInfo")

@company.field("address")
def resolve_address(
    _: None, info: GraphQLResolveInfo, ....
) -> FrameworkInstance:
    return get_address(....)

Both routes will reach the same destination:

Now comes the debate: some prefer the Ariadne approach, claiming it gives better decoupling.

However, looking back at history, this is the same debate that has happened many times:

We had this debate with web services, between contract first vs contract last, WSDL-to-code vs code-to-WSDL.

We had this debate with ORMs, whether the domain comes first or last.

But what has happened? Both approaches survived through time.

It’s actually similar to DI: a lot of developers are aware of the decoupling gained by wiring the components through separate configuration files (XML). However, tight coupling won out, and we are using annotations almost everywhere.

The claim of schema first or contract first, so that it can be decoupled and have a single source of truth, actually works out with code first as well. Look at what happened with REST services: we start code first, write the services, then leave it to Swagger to expose the single source of truth (the contract).

Ultimately, I think it boils down to the maturity of the tooling. If either side has superior tooling over the other, it will win out regardless of concepts.

 

 


Build a chrome extension to resume the browsing experience revolution

Build a chrome extension to upgrade browsing experience

 


 

 

 

However, here is a quick summary of the steps:

Create a javascript project

There are no requirements on the framework; developers can choose whichever they prefer: plain JavaScript, React, Angular, or Vue. I have been building extensions with jQuery, Angular, and React. As long as the final package can be rendered as a normal web page, it will work as a Chrome extension.

Set up the manifest

We need to have the manifest set up. Here are some of the configurations:

most importantly, we need to have the background, popup, options and content scripts configured as needed. To invoke the extension by keyboard, the browser and page actions need to be set up, roughly as sketched below.
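A stripped-down manifest v2 sketch with those pieces in place (names, file paths, and the shortcut are placeholders):

{
  "manifest_version": 2,
  "name": "my-extension",
  "version": "1.0.0",
  "background": { "scripts": ["background.js"], "persistent": false },
  "browser_action": { "default_popup": "popup.html" },
  "options_page": "options.html",
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["content.js"] }
  ],
  "commands": {
    "_execute_browser_action": {
      "suggested_key": { "default": "Alt+L" },
      "description": "Open the popup"
    }
  },
  "permissions": ["tabs", "storage"]
}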

Build and Test out

You can load the extension locally using the `developer mode`

Note: if you are using react and react-scripts for packaging, you need to set up for the CSP policy:

Publish to the market

You need to create a developer account and publish using the chrome developer console.

 

 


Poetry dependency update

When updating the dependency version in pyproject.toml, for example

[tool.poetry.dependencies]
python = "^3.8.1"
streamlit = "^0.51"  ## ==> update to 0.56

If there is already existing a poetry.lock, poetry would throw

(streamlit-base) jackie@jackie streamlit-base % poetry install
Installing dependencies from lock file

[NonExistentKey]
'Key "hashes" does not exist.'

install [--no-dev] [--dry-run] [-E|--extras EXTRAS] [--develop DEVELOP]

The reason being that the hashes in the lock file don’t match the content of pyproject.toml.

The way to sort this out is to remove the lock file (poetry.lock), then do the install again to generate a new one:

poetry install

SideCar in Kubernetes Cluster

Service mesh is a design that provides an infrastructure-as-a-service layer within a cloud service. The sidecar is one example of such an implementation.

A sidecar, as its name suggests, acts as a decoupled component attached to other microservices to cater to cross-cutting concerns. One example is a volume mount shared across microservices.

As mentioned in my other post, mount S3 as a shared drive, it’s a great feature to be able to access S3 for CRUD operations. A common usage would embed the volume mount into each container or pod that needs the S3 access.

However, there is a security concern with this.

The capability needed to mount the FUSE drive is `SYS_ADMIN` at a minimum. So to run this as a single container, we need to provide:

docker run --rm -it --cap-add SYS_ADMIN s3-sidecar bash

to run it with docker-compose:

s3-sidecar:
  restart: on-failure
  image: s3-sidecar
  init: true
  build:
    context: s3-sidecar
    target: dev
  environment:
    - DEPLOYMENT=STAGING
  privileged: true
  cap_add:
    - SYS_ADMIN # This is needed for mounting the volume

to run it in Kubernetes:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  selector:
    matchLabels:
      app: s3-sidecar
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: volume-provider
    spec:
      containers:
      - image: {{ .Values.aws.env }}s3-sidecar
        imagePullPolicy: IfNotPresent
        name: s3-sidecar
        volumeMounts:
        - name: s3
          mountPath: /s3
          mountPropagation: Bidirectional
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
      restartPolicy: Always
status: {}

All of these expose elevated privileges to the container and pod. With this access, an experienced developer could bypass the designated location (/s3 in the pod above) and write or delete other files within the VFS.

Examples:

https://www.exploit-db.com/exploits/47147

https://kubernetes.io/blog/2018/04/04/fixing-subpath-volume-vulnerability/

Unless the permission required for FUSE mounting is corrected, it's important to segregate the component doing this mounting.

The implementation of this segregation is the sidecar. Instead of embedding the mounting into each container or pod, we create a dedicated sidecar to do the mounting at a single point, and apply different security controls to this single component so that it is not easily exposed or exploited. Meanwhile, the containers or pods that need to access S3 simply get read-only access to the sidecar volume.

Here is the implementation:

For the sidecar, it will be provided with the needed permission `SYS_ADMIN` to run the mounting.

Note: on the sidecar the mountPropagation should be Bidirectional, so that the FUSE mount created inside the sidecar container propagates back to the host path and content updates can flow back into S3.

kind: Service
apiVersion: v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: s3-sidecar
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: s3-sidecar
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: s3-sidecar # must match the selector above
    spec:
      containers:
      - image: {{ .Values.aws.env }}s3-sidecar
        imagePullPolicy: IfNotPresent
        name: s3-sidecar
        resources: {}
        env:
        - name: DEPLOYMENT
          value: -{{ .Values.deployment }}
        volumeMounts:
        - name: s3
          mountPath: /s3
          mountPropagation: Bidirectional
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
      restartPolicy: Always
      volumes:
      - name: s3
        hostPath:
          path: /s3
status: {}

for each individual container or pod that needs the access:

vol = client.V1Volume(
    name="s3",
    host_path=client.V1HostPathVolumeSource(path="/s3"),
)
s3 = client.V1VolumeMount(
    name="s3",
    mount_path="/s3",
    mount_propagation="HostToContainer",
    read_only=True,
)
client.AppsV1Api().create_namespaced_replica_set(
    ...
    V1ReplicaSet(
        ...
        spec=V1ReplicaSetSpec(
            ...
            template=V1PodTemplateSpec(
                ...
                spec=V1PodSpec(
                    volumes=[vol],
                    containers=[
                        V1Container(
                            ...
                            volume_mounts=[s3],
                            image_pull_policy="IfNotPresent",
                        )
                    ],
                ),
            ),
        ),
    ),
)

Note: we limit the mountPropagation to HostToContainer, so that writes or updates to the mount point or its subdirectories become visible in these containers, while these containers cannot propagate mounts back to the host or into S3.

This is the topology:

pipenv

issue with pipenv

(base) ➜ lib git:(develop) ✗ python --version
dyld: Library not loaded: @executable_path/../.Python
Referenced from: /Users/jackie/.local/share/virtualenvs/streamlit-GdDAcdiW/bin/python
Reason: image not found
[1] 81477 abort python --version

(base) ➜ lib git:(develop) ✗ ..
dyld: Library not loaded: @executable_path/../.Python
Referenced from: /Users/jackie/.local/share/virtualenvs/streamlit-GdDAcdiW/bin/python
Reason: image not found

(base) ➜ streamlit git:(develop) ✗ make all-devel

dyld: Library not loaded: @executable_path/../.Python
Referenced from: /Users/jackie/.local/share/virtualenvs/streamlit-GdDAcdiW/bin/python
Reason: image not found

solution:
1. reinstall pipenv

(base) ➜ streamlit git:(develop) ✗ brew uninstall pipenv
Uninstalling /usr/local/Cellar/pipenv/2018.11.26_3… (1,483 files, 21.3MB)

(base) ➜ streamlit git:(develop) ✗ brew install pipenv

  2. clear the old virtualenv

    (base) ➜ lib git:(develop) ✗ rm -rf $(pipenv --venv)
    (base) ➜ lib git:(develop) ✗ pipenv --venv
    No virtualenv has been created for this project yet!
    Aborted!
    (base) ➜ lib git:(develop) ✗ pipenv --rm
    No virtualenv has been created for this project yet!
    Aborted!

  3. recreate a new env with a specific version (one that exists on the local OS)

    (base) ➜ lib git:(develop) ✗ pipenv --python 3.7
    Warning: the environment variable LANG is not set!
    We recommend setting this in ~/.profile (or equivalent) for proper expected behavior.
    Creating a virtualenv for this project…

nvm

command to list the currently installed node versions:

(base) ➜ streamlit git:(develop) ✗ nvm ls
v10.8.0
-> system
default -> 10.8.0 (-> v10.8.0)
node -> stable (-> v10.8.0) (default)
stable -> 10.8 (-> v10.8.0) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/erbium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.18.1 (-> N/A)
lts/erbium -> v12.14.1 (-> N/A)

to switch to a different version:

(base) ➜ streamlit git:(develop) ✗ nvm use system
Now using system version of node: v12.16.0 (npm v6.13.4)

to set the default version:

(base) ➜ streamlit git:(develop) ✗ nvm alias default system
default -> system
(base) ➜ streamlit git:(develop) ✗ nvm ls
v10.8.0
-> system
default -> system
node -> stable (-> v10.8.0) (default)
stable -> 10.8 (-> v10.8.0) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/erbium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.18.1 (-> N/A)
lts/erbium -> v12.14.1 (-> N/A)

Kubernetes ports

There are three kinds of ports widely used in Kubernetes resource management:
port is the port exposed within the cluster, so other nodes/pods in the same cluster can reach it via service:port
targetPort is the port exposed from within the pod, i.e. the port the container listens on
nodePort is the port exposed to the outside world, so on the public network it can be reached as public-ip:nodePort

For example:


kind: Service
apiVersion: v1
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  type: NodePort # required for nodePort to take effect
  ports:
  - port: 443
    targetPort: 443
    nodePort: 31234
  selector:
    app: nginx
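
If you are creating the Service programmatically, the same three ports map onto the kubernetes Python client roughly like this (a minimal sketch; the service name, labels and namespace are just the examples from above):

from kubernetes import client, config

# load credentials from ~/.kube/config (use load_incluster_config() inside a pod)
config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="nginx", labels={"app": "nginx"}),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "nginx"},
        ports=[
            client.V1ServicePort(
                port=443,         # exposed inside the cluster as service:port
                target_port=443,  # port the container in the pod listens on
                node_port=31234,  # exposed on every node's public IP
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)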

Mount S3 as share drive

Detailed steps to mount the S3 bucket as share drive

AWS S3 is a popular choice nowadays for cloud storage. As Amazon claims:

Amazon S3 is designed for 99.999999999% (11 9’s) of data durability because it automatically creates and stores copies of all S3 objects across multiple systems.

It is a common need to mount a cloud drive (as a shared drive or local disk), so that we can access it (an S3 bucket here) from other cloud services or even our local OS. It is super convenient for viewing, writing and updating files in the cloud.

The tool I have leveraged here for mounting is s3fs, which mounts an S3 bucket as a FUSE file system.

Here are the steps to achieve the state:

Create the S3 bucket

As a preliminary, we need to have the S3 bucket created.

There are two ways to create the bucket, either from the web console or using the AWS CLI.

Some prefer to use the web console, as it's quite intuitive to access the console and create the bucket there. In most cases it's just a matter of clicking the Create bucket button and following the steps.

You will be able to review and make changes if needed before the bucket is created.

In most cases you would like to Block all public access.

Note: when you create the bucket using the console, you need to select the region. Make sure you choose the region physically closest to the server/computer from which you will access the bucket; AWS publishes the list of regions and their corresponding codes.

However, once the bucket is created, don't get confused: the S3 console displays all buckets regardless of region. You can find the actual region a bucket resides in from the corresponding column.

Alternatively, if you are more familiar with the AWS CLI, you might like to create the bucket from the command line.

aws s3api create-bucket --bucket data-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

You need to have your AWS CLI set up properly before you can run the command:

aws configure

AWS Access Key ID [None]:

AWS Secret Access Key [None]:

Default region name [None]: us-west-2

Default output format [None]: json

After this, it will generate valid config and credentials files:

~/.aws/config

~/.aws/credentials
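
With the same credentials in place, the bucket can also be created from Python via boto3 (a minimal sketch; the bucket name and region are just the examples used above):

import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# LocationConstraint must match the region (omit it entirely for us-east-1)
s3.create_bucket(
    Bucket="data-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)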

set up the proper access

After the bucket is created, you need to set up the proper access. There are two approaches for S3 access: either an S3 bucket policy or an IAM policy.

Personally, I think the IAM policy should be the de facto place for controlling access to most AWS resources, because it's decoupled: it's specifically for access control alone, not tied to any specific role or bucket. An S3 bucket policy, on the other hand, is bucket specific, which works if your bucket is eternal.

However, you have both choices; make your own judgment here.

For IAM policy, you need to create the role, then associate the principal/person who needs to access the bucket with the role:

Note: you need to grant these actions on the bucket:

"s3:ListBucket",

"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"

so the policy statement would combine the actions above with the bucket resources.

for the object access, make sure you grant both resources:

arn:aws:s3:::bucket
arn:aws:s3:::bucket/*

You can use the AWS policy simulator to confirm that your setup grants the intended access:

https://policysim.aws.amazon.com/home/index.jsp?#
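
Another quick sanity check is to exercise the four grants from code once the policy is attached (a sketch with boto3; the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")
bucket = "data-bucket"  # placeholder bucket name

# exercises the four grants above: list, put, get, delete
s3.list_objects_v2(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="healthcheck.txt", Body=b"ok")
s3.get_object(Bucket=bucket, Key="healthcheck.txt")
s3.delete_object(Bucket=bucket, Key="healthcheck.txt")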

Alternatively, you can grant access through an S3 bucket policy attached to the specific bucket.

install s3fs either from source or package

After you have the S3 bucket created with the proper access, you can proceed with the installation of s3fs.

You can either install the package directly, for example:

#ubuntu

sudo apt install s3fs

Alternatively, in case you would like to build from source with any customization:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install

mount by role

Now with the bucket and s3fs installed, you can do the real mounting:

mkdir /mnt-drive && s3fs -o iam_role="role-from-step-2" -o allow_other S3-bucket /mnt-drive

In most cases, especially if you would like to access the mount from other cloud services, you need to mount it by role; normally those roles are tied to the cloud resources.

For example, if you would like to access it from an EC2 instance, granting the role to that EC2 instance is enough.

mount by key

In most cases you might not have the access keys, as these are normally owned by the system admin. But in case you do, you can mount using your AWS access keys:

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs

mkdir /mnt-drive && s3fs -o passwd_file=${HOME}/.passwd-s3fs -o allow_other S3-bucket /mnt-drive

From here, you will be able to access your S3 bucket from the mount point.

FTP

In addition, in case you would like to expose the mount point as an FTP server:

install vsftpd

systemctl start vsftpd

then you can access it from your FTP client

python logs within kubernetes

By default, logs may not show up from a Kubernetes pod. This is because Python buffers stdout/stderr, so output is not flushed to the container log in time.

The way to sort this out is to set

PYTHONUNBUFFERED

If this is set to a non-empty string it is equivalent to specifying the -u option

in helm chart, something like this

kind: Service
apiVersion: v1
metadata:
  labels:
    app: example
  name: example
spec:
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    app: example
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: example
  name: example
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
    spec:
      containers:
      - image: example
        imagePullPolicy: IfNotPresent
        name: example
        ports:
        - containerPort: 443
        resources: {}
        env:
          - name: PYTHONUNBUFFERED
            value: "1" # any non-empty string enables unbuffered output
      restartPolicy: Always
status: {}
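
If you would rather not rely on the environment variable, the same effect can be achieved from the application side by flushing explicitly; a minimal sketch:

import logging
import sys

# flush stdout on every print, equivalent in effect to PYTHONUNBUFFERED / python -u
print("processing item 42", flush=True)

# or route application logs through a handler that writes (and flushes) to stdout
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.info("processing item 42")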

docker privilege

 

the --privileged flag is powerful yet dangerous:

https://blog.trendmicro.com/trendlabs-security-intelligence/why-running-a-privileged-container-in-docker-is-a-bad-idea/

 

while the alternative is often just to mount the Docker socket:

docker run -v /var/run/docker.sock:/var/run/docker.sock 

it’s not strictly docker-in-docker, but it should be able to serve most use cases.

options to configure AWS provider in terraform

provider "aws" {
  region     = "us-east-1"
  access_key = "your-access-key-here"
  secret_key = "your-secret-key-here"
}

or point to the profile

provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/credentials" // default: "~/.aws/credentials"
  profile                 = "tf-admin"           // default: "default"
}

Set up container runtime variable

A Dockerfile ARG is only available at build time; however, while running the container we might need to access the same value as an environment or configuration variable. Here is the workaround:

ARG version=unknown ## build time
ENV version=${version} ## run time

Then to pass in the arg:

docker build --build-arg version=0.0.1 .

for docker-compose:

container:
  image: image
  restart: always
  build:
    context: dockerfile
    args:
      version: ${version} ## alternatively, default a value here

For docker-compose, `${version}` is resolved from the shell environment (or an .env file) at build time.
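
Inside the running container, the mirrored ENV is then just a regular environment variable; a minimal sketch of reading it from application code (variable name as used above):

import os

# read the value that was baked in via ARG -> ENV at build time
version = os.environ.get("version", "unknown")
print(f"running image version {version}")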

terraform plan

I have been running terraform plan on some limited changes to the .tf files.

However, the plan always reports the whole project as needing to be added, even with a forced refresh:

terraform refresh

 

Turns out the solution is to sync the workspace; make sure the workspace you are working in is the intended one:

terraform workspace list

terraform workspace select master

Tab & URL manager

Upgrade your Chrome experience. Leverage this app to handle tab and URL navigation.
You can maximize your screen space and use Chrome in complete full screen mode by dropping the address bar and the tab bar.

* type a URL followed by enter to visit the specified address
* type a term followed by enter to search the term (the search engine is configurable)
* type a term to navigate to any popular search suggestions
* type a keyword to navigate to any mapped URL (the map is configurable)
* type a term to navigate to any open tabs
* type a term to get the likely result you would like to visit


**Configurations**
change the search engine from the options page
configure the keyword and URL mapping

**Keyboard Shortcut:**
 
Windows: Alt+L

Mac: ⌥+L

 

https://chrome.google.com/webstore/detail/tab-url-manager/egiemoacchfofdhhlfhkdcacgaopncmi

mount AWS S3 as share drive

AWS S3 is a popular choice for cloud storage, due to its cost and stability.

It is super convenient if we can mount the S3 bucket, so as to access it through FTP or from the local OS directly.

 

Steps:

  1. create the S3 bucket
  2. set up the proper access
    1. either through s3 policy
    2. or through IAM policy
      1. if through IAM policy, need to create the role, associate the role with the policy
  3. install s3fs either from source or package
    #ubuntu
    sudo apt install s3fs
    
    
    #from source
    
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git
    cd s3fs-fuse
    ./autogen.sh
    ./configure
    make
    sudo make install

     

  4. mount by role

     

    mkdir /mnt-drive && s3fs -o iam_role="role-from-step-2" -o allow_other S3-bucket /mnt-drive

     

  5. mount by key
    1. mkdir /mnt-drive && s3fs -o passwd_file=${HOME}/.passwd-s3fs
      -o allow_other S3-bucket /mnt-drive
  6. FTP
    1. install vsftpd

      systemctl start vsftpd

 

 

ZenHelper

 

Full-screen your Chrome, then leverage this plugin for the URL box.
Enjoy Chrome in completely full screen mode even without the omnibox (Zen). Leave the address typing to ZenHelper.

Keyboard Shortcut: 
windows: Ctrl+Shift+L
mac: Cmd+Shift+L


Note: this is still in beta; new features are WIP.

https://chrome.google.com/webstore/detail/zenhelper/odpfhihammejhokpbbkbhlmhnoipglmc

 

https://chrome.google.com/webstore/detail/tab-url-manager/egiemoacchfofdhhlfhkdcacgaopncmi

Map vs List comprehension in Python – DEV Community 👩‍💻👨‍💻

https://dev.to/lyfolos/map-vs-list-comprehension-in-python-2ljj

Had exactly this discussion the other day among several strongly opinionated programmers.


After all, both are still around, and they exist for a reason. It ultimately boils down to personal preference; neither should be forced as superior.
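
For reference, a minimal comparison of the two (only the spelling really differs; map returns a lazy iterator while the comprehension builds the list eagerly):

nums = [1, 2, 3, 4]

# list comprehension: builds the list eagerly, reads left to right
squares_lc = [n * n for n in nums]

# map: returns a lazy iterator, so wrap it in list() to materialise the result
squares_map = list(map(lambda n: n * n, nums))

assert squares_lc == squares_map == [1, 4, 9, 16]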

The craziness

def create_multipliers():
    multipliers = []

    for i in range(5):
        def multiplier(x):
            return i * x
        multipliers.append(multiplier)

    return multipliers

# the closures capture the variable i, not its value, so every
# multiplier sees i == 4 by the time it is called (late binding)
for multiplier in create_multipliers():
    print(multiplier(2))

print("=========================")

def create_multipliers_lambda():
    # same late-binding behaviour with lambdas
    return [lambda x: i * x for i in range(5)]

for multiplier in create_multipliers_lambda():
    print(multiplier(2))

print("=========================")

def create_multipliers_fix():
    # the default argument i=i captures the current value of i at definition time
    return [lambda x, i=i: i * x for i in range(5)]

for multiplier in create_multipliers_fix():
    print(multiplier(2))
8
8
8
8
8
=========================
8
8
8
8
8
=========================
0
2
4
6
8

status for asynchronous process

It's not a completely novel requirement to get the status of asynchronous processing: for example, file upload progress, order status, alerts on processing failures, etc.

IMO there are mostly two approaches (the second is sketched below):

1. synchronous: whether it's through REST/GraphQL/queue/DB/FS, if the client is polling for status, that's a synchronous call patched onto the original fire-and-forget async process.

2. asynchronous: this is still a patch on the original async process, but a valid one, since getting the status is really a "new" requirement from the client. Instead of polling, which removes most of the merits of the original async call, the client can expose a webhook for the server to post the status back asynchronously. Even though it's a patch, it is patched asynchronously, leaving both processes fire and forget.
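
A minimal sketch of the webhook side, assuming a Python/Flask service on the client and a hypothetical /status-callback route; the async worker POSTs the final status here instead of being polled:

from flask import Flask, request

app = Flask(__name__)

# hypothetical callback endpoint exposed by the client;
# the async worker POSTs {"job_id": ..., "status": ...} here when it finishes
@app.route("/status-callback", methods=["POST"])
def status_callback():
    payload = request.get_json(force=True)
    print(f"job {payload.get('job_id')} finished with status {payload.get('status')}")
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)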

https://stackoverflow.com/questions/54841672/get-status-of-asynchronous-invocationtype-event-aws-lambda-execution/58870421#58870421
https://docs.aws.amazon.com/step-functions/latest/dg/sample-project-job-poller.html

access the parent docker daemon from the container

Recently I had a need to build/start/stop some sibling containers (as opposed to docker-within-docker); the way to do it is to expose the host's Docker socket to the container:

for single container:

docker run -v /var/run/docker.sock:/var/run/docker.sock

for docker compose

services:
  container-to-control-other-sibling-containers:
    image: xyz
    build:
      context: .folder/to/the/controller/container/image
    ports:
      - 5000:5000
    volumes:
      - ./:/app
      - /var/run/docker.sock:/var/run/docker.sock
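
From inside the controller container, any Docker client pointed at /var/run/docker.sock now reaches the host daemon; a minimal sketch with the Python Docker SDK (assuming the `docker` package is installed in the image):

import docker

# from_env() picks up /var/run/docker.sock, i.e. the host daemon mounted above
client = docker.from_env()

# list the sibling containers running on the host
for container in client.containers.list():
    print(container.name, container.status)

# start a sibling container next to this one (image name is illustrative)
client.containers.run("alpine", "echo hello from a sibling", remove=True)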


Actually the daemon can also be configured to listen on other hosts:
https://docs.docker.com/v17.09/engine/admin/#configure-the-docker-daemon

update dockerfile within docker compose

I encountered an issue with a stale Dockerfile: it turns out docker-compose caches previous builds (this is not clearly stated in the docs).

So to keep it updated, run the build without cache, then bring it up:


docker-compose build --no-cache && docker-compose up

the interesting state of monorepo

It might be true that Google or Facebook have kept a monorepo alive for a decade or two, but that doesn't mean it's the right approach for any new project to adopt in 2019.

Tech has kept improving on loose coupling and separation of concerns, a design idea going back to GoF and even earlier. This has continuously generated benefits in version control, project segregation, builds, CI/CD and microservices, resulting in faster, better-quality delivery and easier long-term maintenance. I will be surprised if monorepo flourishes in the next decade or any near future.

quite interesting to see all those ideas/noises/debates come back and forth though.


python performance

It's a dilemma: the reason the language has become more popular (especially among new or perhaps amateur programmers) is its loose design and ease of learning (at the cost of not leveraging static typing, multi-threaded concurrency, or JIT/AOT compilation, for example), which is exactly why it does not perform as well as other languages.

https://hackernoon.com/why-is-python-so-slow-e5074b6fe55b

Microsoft announces it’s ready to contribute to OpenJDK

MS has been trying to emulate the success of Java for a long time, from J++ to .NET and C#.

Now MS is a Java shop as well.

Java is not only the de facto language for thoughtful enterprise apps; it also keeps evolving, moving from a multi-year release cycle to a half-yearly one, with each new version bringing up-to-date features and design principles while remaining as backward compatible as possible (the Python versioning saga is unimaginable from a Java world). Java is a very mature, stable language built on solid, carefully thought-out design principles, and it manages to grow even faster than many newer languages while staying stable and trustworthy.

https://jaxenter.com/microsoft-ready-contribute-openjdk-163550.html

DevOps now most sought-after skill, and with good reason

Years back, it took a lot of hassle and careful thought to build the right project skeleton with the right code structure and inline libraries. After that came the equally, if not more, challenging job of paving the runway for the PoC: build, test, deploy, and put it onto a suitable scale of web servers/containers, application servers, and databases with normalisation and vertical/horizontal scaling.

All of that effort is now significantly reduced thanks to new frameworks that bundle these concerns together into an opinionated suite.

With the popularity of these frameworks and tools comes a new challenge: properly leveraging the tools.

AWS, for example, has shot up from a single-digit toolset to probably a hundred services now.

Containers have gone from VMs, chroot and jails to Docker, clusters, Kubernetes and Swarm.

There is definitely value in having the skills (DevOps, admin or infra) to pave the right platform in this era.

However, I won't be surprised if, as the tide moves, tools come along to bundle these tools together and save all of this skill and effort in future; see, for example, the recent Pivotal conference asking developers to forget k8s.
https://www.zdnet.com/article/devops-now-most-sought-after-skill-survey-finds/

Authenticating Amazon ECR Repositories for Docker CLI with Credential Helper

The default way to authenticate and then talk to the registry is through

docker login

The user name is AWS and the password can be retrieved using

aws ecr get-login-password

So far it’s pretty straightforward.

However, there is a caveat: the token from the AWS CLI is valid for 12 hours only. This is AWS's approach to securing the access; if a token is compromised it expires, and only an authorised party can retrieve a new one.

One possible approach to keep the Docker CLI working is to refresh the

docker login

every 12 hours, which is not difficult but is very ugly.

Instead, AWS has a credential helper. With the amazon-ecr-credential-helper installed, when we run the Docker CLI it picks up the config from ~/.docker/config.json

"credHelpers": {
		"aws_account_id.dkr.ecr.region.amazonaws.com": "ecr-login"
	}

so that it leverages the helper to talk to that specific ECR registry. The helper in turn uses the pre-configured ~/.aws/credentials and ~/.aws/config to pick up the right access key and secret to talk to ECR.

This is a cool solution not only for the Docker CLI but also for a lot of serverless platforms that rely on containers.

https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/

K8s client authentication

I have been working on a serverless framework recently, which I have put onto EKS.

Most of it worked, except the CLI, which leverages the k8s client-go library to authenticate and is not able to do so with EKS (it works fine with Azure AKS and GCP).

Turns out the issue was with the k8s client-go library, which doesn't deal with aws-iam-authenticator. As a workaround, the patch is to apply the service account token as a bearer token.


//command to get the token
kubectl describe secret account -n namespace | grep -E '^token' | cut -f2 -d':' | tr -d " "

then in the client-go, patch the token into the bearer header:


//retrieve the token either from secret file or env var
//token, err := ioutil.ReadFile("~/secrets/kubernetes.io/serviceaccount/" + v1.ServiceAccountTokenKey)
//token := os.Getenv("BEARER_TOKEN")

//add the header if it's not yet there
r.headers.Set("Authorization", "Bearer xxx")

//before the real http call
resp, err := client.Do(req)
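
For what it's worth, the same service-account-token workaround can also be expressed with the kubernetes Python client; a rough sketch (the endpoint, CA path and namespace are placeholders):

from kubernetes import client

# token obtained from the service-account secret, e.g. via the kubectl command above
token = "<token from kubectl describe secret>"

cfg = client.Configuration()
cfg.host = "https://<eks-api-endpoint>"      # placeholder cluster endpoint
cfg.ssl_ca_cert = "/path/to/cluster-ca.crt"  # placeholder CA bundle
cfg.api_key = {"authorization": "Bearer " + token}

v1 = client.CoreV1Api(client.ApiClient(cfg))
print([p.metadata.name for p in v1.list_namespaced_pod("default").items])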

refer to:
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/
https://github.com/kubernetes/client-go/blob/master/rest/request.go
https://github.com/1wpro2/nuclio/pull/1

Java being the primary language

It's not a perfect language, but it is the one in the lead position, continuously approaching being the most complete, feature-rich and thoughtful language, suitable for most large enterprise-grade build-outs, with future support, expansion and scaling in mind.


https://www.jetbrains.com/lp/devecosystem-2019/

JS Frameworks

This crazy race has no winner at all since it is neverending. That’s it! Yesterday you were learning Backbone.js, jQuery, Knockout.js, Ember.js, then AngularJS and now ReactJS, Next.js, Vue.js, all Angular flavors. Today, up comes Ext JS and Aurelia, the new ones. And tomorrow another will come up. The framework array list is endless.

https://dev.to/blarzhernandez/why-you-should-learn-javascript-principles-first-not-the-hottest-frameworks-kb9

Declarative delusion

The delusion happens when developers start believing that declarative languages are better than imperative languages.

http://tutorials.jenkov.com/the-declarative-delusion.html

Personally, I think there is no perfect solution for everything.
Nowadays there are so many solutions/partial-solutions/broken-solutions, and people, who naturally have a lazy tendency, gravitate to whatever is easy to understand simply because it is easy. But an easy solution is normally only suitable for a special or specific domain; in some cases it's just broken. Otherwise, why, for example, is the most widely used language for enterprises Java?

ES6 destructing and spread

http://www.typescriptlang.org/docs/handbook/variable-declarations.html#destructuring

Destructuring automatically maps the source onto the assigned variables, whether the source is an array, object or tuple.


const [a, b, c] = ...   // array

const {a, b, c} = ...   // object

const [a, b, c] = ...   // tuple (same syntax as array destructuring)

Spread/rest automatically maps any number of remaining items in an array to a single variable, and vice versa; basically it represents an array as one (spread) variable.


const [a, ...b] = ...
const x = [a, ...b]

and in order to destructure an array of objects:

let o = [
  {
    a: "foo",
    b: 12,
    c: "bar"
  },
  {
    a: "test",
    b: 20,
    c: "check"
  }
];

let [{a}] = o                 // ====> "foo"

let [{a: a1}, {a: a2}] = o    // ====> "foo", "test"

publish vscode extension with bundled in images/gif

By default, vscode expects a public git repository for referencing the code and even the images shown in the markdown.

An alternative, workable solution for bundled-in images is to publish a pre-packaged vsix:

You can override that behavior and/or set it by using the --baseContentUrl and --baseImagesUrl flags when running vsce package. Then publish the extension by passing the path to the packaged .vsix file as an argument to vsce publish.

https://code.visualstudio.com/api/working-with-extensions/publishing-extension

: in typescript destructuring

tsc keeps complaining about this code having an implicit any type:

const updateFile = (type: any, { fsPath: filepath }) => {
.....
}

Turns out the : within destructuring is for aliasing.

so the code needs to change to

const updateFile = (type: any, { fsPath: filepath }: {fsPath: any}) => {
...
}

quote:

As you can see, component is implicitly any. The : inside of your destructuring assignment in the function arguments is aliasing rather than functioning as a type signature. You could correct it by doing the following:

Don't waste time writing perfect code (do you agree?)

An interesting read: clean code vs perfect code vs pragmatic coding vs refactoring.

I guess one point missing from the article is: it's easier for a good developer to write near-perfect code than for an average developer to write clean code.

And the dilemma is that without advocating for high-quality coding, developers' skills won't improve.

Ultimately, it comes down to coding skills.

https://mp.weixin.qq.com/s/YTLn8EaDfojS9s5rgbw2ow

http://swreflections.blogspot.com/2014/11/dont-waste-time-writing-perfect-code.html?m=1

https://martinfowler.com/articles/workflowsOfRefactoring/#tension