Build a Chrome extension to resume the browsing experience revolution


Here is a quick summary of the steps:

Create a JavaScript project

There are no requirements on the framework; developers can choose whichever they prefer, whether plain JavaScript, React, Angular or Vue. I have been building extensions with jQuery, Angular and React. As long as the final package can be rendered as a normal web page, it will work as a Chrome extension.

Set up the manifest

We need to have the manifest set up. Here are some of the configurations:

Most importantly, we need to have the background, popup, options and content scripts configured as needed. To invoke the extension from the keyboard, the browser or page action needs to be set up, together with a command binding.
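A minimal sketch of what such a manifest could look like (manifest v2 here, since browser/page actions are used; the file names and the shortcut are placeholders):

{
  "manifest_version": 2,
  "name": "my-extension",
  "version": "0.0.1",
  "background": { "scripts": ["background.js"], "persistent": false },
  "browser_action": { "default_popup": "popup.html" },
  "options_page": "options.html",
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["content.js"] }
  ],
  "commands": {
    "_execute_browser_action": {
      "suggested_key": { "default": "Alt+L", "mac": "Alt+L" }
    }
  }
}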

Build and Test out

You can load the unpacked extension locally by enabling `developer mode` on the chrome://extensions page.

Note: if you are using React with react-scripts for packaging, you need to account for the extension's CSP policy:
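For example, react-scripts inlines the webpack runtime as an inline script by default, which the extension CSP blocks. A common workaround (a sketch; adjust to your own build) is to disable the inlining at build time, and only extend the `content_security_policy` key in the manifest if your build really needs it:

# .env read by react-scripts at build time
INLINE_RUNTIME_CHUNK=false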

Publish to the market

You need to create a developer account and publish through the Chrome Web Store developer console.

 

 


Poetry dependency update

When updating a dependency version in pyproject.toml, for example:

[tool.poetry.dependencies]
python = "^3.8.1"
streamlit = "^0.51"  # ==> update to 0.56

If a poetry.lock file already exists, poetry will throw:

(streamlit-base) jackie@jackie streamlit-base % poetry install
Installing dependencies from lock file

[NonExistentKey]
'Key "hashes" does not exist.'

install [--no-dev] [--dry-run] [-E|--extras EXTRAS] [--develop DEVELOP]

The reason is that the hashes in the lock file don't match the content of pyproject.toml.

The way to sort this out is to remove the lock file, then run the install again to generate a new one:

rm poetry.lock
poetry install
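Alternatively (a sketch; the exact behaviour depends on your Poetry version), the lock file can be regenerated in place instead of deleted:

poetry lock              # re-resolve and rewrite poetry.lock from pyproject.toml
poetry install           # install from the refreshed lock file

# or bump a single dependency and refresh the lock in one go
poetry update streamlit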

Sidecar in a Kubernetes Cluster

Service mesh is a design that provides a dedicated infrastructure layer as a service within the cloud. The sidecar is one example of such an implementation.

A sidecar, as its name suggests, acts as a decoupled component attached to other microservices to cater for cross-cutting concerns. One example is a volume mount shared across microservices.

As mentioned in my other post, Mount S3 as a shared drive, it's a great feature to be able to access S3 for CRUD operations. A common approach is to embed the volume mount into each container or pod that needs the S3 access.

However, there is a security concern with this.

The Linux capability needed to mount the FUSE drive is `SYS_ADMIN` at a minimum. So to run this as a single container, we need to provide:

docker run --rm -it --cap-add SYS_ADMIN s3-sidecar bash

to run it with docker-compose:

s3-sidecar:
  restart: on-failure
  image: s3-sidecar
  init: true
  build:
    context: s3-sidecar
    target: dev
  environment:
    - DEPLOYMENT=STAGING
  privileged: true
  cap_add:
    - SYS_ADMIN # This is needed for mounting the volume

to run it in Kubernetes:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  selector:
    matchLabels:
      app: s3-sidecar
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: s3-sidecar
    spec:
      containers:
      - image: {{ .Values.aws.env }}s3-sidecar
        imagePullPolicy: IfNotPresent
        name: s3-sidecar
        volumeMounts:
        - name: s3
          mountPath: /s3
          mountPropagation: Bidirectional
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
      restartPolicy: Always
      # volume definition assumed here (hostPath, mirroring the sidecar example below)
      volumes:
      - name: s3
        hostPath:
          path: /s3
status: {}

All of these expose the elevated privilege to the container and pod. With this access, experienced developers could find ways to bypass the designated location (/s3 in the pod above, for example) and write to or delete other files within the VFS.

Examples:

https://www.exploit-db.com/exploits/47147

https://kubernetes.io/blog/2018/04/04/fixing-subpath-volume-vulnerability/

Unless the permission required for FUSE mounting changes, it's important to segregate the component that does the mounting.

The implementation of this segregation is the sidecar. Instead of embedding the mounting into each container or pod, we create a dedicated sidecar that does the mounting at a single point. We apply different security controls to this single component to keep it from being exposed or exploited, while the containers or pods that need S3 access only get read-only access to the sidecar's volume.

Here is the implementation:

For the sidecar, it will be granted the `SYS_ADMIN` capability needed to perform the mounting.

Note: the mountPropagation should be Bidirectional, as we need the access to be able to update content back into S3.

kind: Service
apiVersion: v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    app: s3-sidecar
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: s3-sidecar
  name: s3-sidecar
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: s3-sidecar
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: s3-sidecar
    spec:
      containers:
      - image: {{ .Values.aws.env }}s3-sidecar
        imagePullPolicy: IfNotPresent
        name: s3-sidecar
        resources: {}
        env:
        - name: DEPLOYMENT
          value: -{{ .Values.deployment }}
        volumeMounts:
        - name: s3
          mountPath: /s3
          mountPropagation: Bidirectional
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
      restartPolicy: Always
      volumes:
      - name: s3
        hostPath:
          path: /s3
status: {}

for each individual container or pod that needs the access:

vol = client.V1Volume(
    name="s3",
    host_path=client.V1HostPathVolumeSource(path="/s3"),
)
s3 = client.V1VolumeMount(
    name="s3",
    mount_path="/s3",
    mount_propagation="HostToContainer",
    read_only=True,
)
client.AppsV1Api().create_namespaced_replica_set(
    ...
    V1ReplicaSet(
        ...
        spec=V1ReplicaSetSpec(
            ...
            template=V1PodTemplateSpec(
                ...
                spec=V1PodSpec(
                    volumes=[vol],
                    containers=[
                        V1Container(
                            ...
                            volume_mounts=[s3],
                            image_pull_policy="IfNotPresent",
                        )
                    ],
                ),
            ),
        ),
    ),
)

Note: we limit the mountPropagation to HostToContainer, so that writes or updates to the mount point or its subdirectories become visible in these containers, while the containers themselves cannot propagate content back into S3.


pipenv

issue with pipenv

(base) ➜ lib git:(develop) ✗ python --version
dyld: Library not loaded: @executable_path/../.Python
Referenced from: /Users/jackie/.local/share/virtualenvs/streamlit-GdDAcdiW/bin/python
Reason: image not found
[1] 81477 abort python --version

(base) ➜ lib git:(develop) ✗ ..
dyld: Library not loaded: @executable_path/../.Python
Referenced from: /Users/jackie/.local/share/virtualenvs/streamlit-GdDAcdiW/bin/python
Reason: image not found

(base) ➜ streamlit git:(develop) ✗ make all-devel

dyld: Library not loaded: @executable_path/../.Python
Referenced from: /Users/jackie/.local/share/virtualenvs/streamlit-GdDAcdiW/bin/python
Reason: image not found

solution:
1. reinstall pipenv

(base) ➜ streamlit git:(develop) ✗ brew uninstall pipenv
Uninstalling /usr/local/Cellar/pipenv/2018.11.26_3… (1,483 files, 21.3MB)

(base) ➜ streamlit git:(develop) ✗ brew install pipenv

  2. clear old virtualenv

    (base) ➜ lib git:(develop) ✗ rm -rf pipenv --venv
    (base) ➜ lib git:(develop) ✗ pipenv --venv
    No virtualenv has been created for this project yet!
    Aborted!
    (base) ➜ lib git:(develop) ✗ pipenv --rm
    No virtualenv has been created for this project yet!
    Aborted!

  3. recreate a new env with a specific version (one that exists on the local OS)

    (base) ➜ lib git:(develop) ✗ pipenv --python 3.7
    Warning: the environment variable LANG is not set!
    We recommend setting this in ~/.profile (or equivalent) for proper expected behavior.
    Creating a virtualenv for this project…

nvm

command to list the currently installed node versions:

(base) ➜ streamlit git:(develop) ✗ nvm ls
v10.8.0
-> system
default -> 10.8.0 (-> v10.8.0)
node -> stable (-> v10.8.0) (default)
stable -> 10.8 (-> v10.8.0) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/erbium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.18.1 (-> N/A)
lts/erbium -> v12.14.1 (-> N/A)

to switch to a different version:

(base) ➜ streamlit git:(develop) ✗ nvm use system
Now using system version of node: v12.16.0 (npm v6.13.4)

to set the default version:

(base) ➜ streamlit git:(develop) ✗ nvm alias default system
default -> system
(base) ➜ streamlit git:(develop) ✗ nvm ls
v10.8.0
-> system
default -> system
node -> stable (-> v10.8.0) (default)
stable -> 10.8 (-> v10.8.0) (default)
iojs -> N/A (default)
unstable -> N/A (default)
lts/* -> lts/erbium (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.18.1 (-> N/A)
lts/erbium -> v12.14.1 (-> N/A)

Kubernetes ports

There are three kinds of ports widely used in Kubernetes resource management:

* port is the port exposed within the cluster, so other nodes/pods in the same cluster can access it through service:port
* targetPort is the port the pod/container itself exposes, i.e. where the service forwards traffic to
* nodePort is the port exposed to the outside world, so on the public network it can be accessed as public-ip:nodePort


kind: Service
apiVersion: v1
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  # NodePort type is required for the nodePort field to take effect
  type: NodePort
  ports:
  - port: 443
    targetPort: 443
    nodePort: 31234
  selector:
    app: nginx
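To illustrate the difference (a sketch; the node IP is a placeholder), the same nginx service is reached differently from inside and outside the cluster:

# from another pod in the same cluster, via the service port
curl -k https://nginx:443

# from the public network, via any node's address and the nodePort
curl -k https://<node-public-ip>:31234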

Mount S3 as a shared drive

Detailed steps to mount an S3 bucket as a shared drive

AWS S3 is a popular choice nowadays for cloud storage. As Amazon claims:

Amazon S3 is designed for 99.999999999% (11 9’s) of data durability because it automatically creates and stores copies of all S3 objects across multiple systems.

It is a common need to mount the cloud drive (as a shared drive or local disk), so that we can access it (the S3 bucket here) from other cloud services or even our local OS. It is super convenient for viewing, writing and updating files in the cloud.

The tool I have leveraged here for mounting is called s3fs, which mounts the S3 bucket as a FUSE file system.

Here are the steps to achieve this:

Create the S3 bucket

As a preliminary step, we need to have the S3 bucket created.

There are two ways to create the bucket, either from the web console or using the AWS CLI.

Some prefer to use the web console, as it's quite intuitive to access the console and create the bucket there. In most cases it's just clicking the Create bucket button and following the steps:

You will be able to review and make changes if needed before the bucket is created:

in most cases you would like to Block all public access.

Note: when you create the bucket using the console, you need to select the region. Make sure you choose the region that is physically closest to the server/computer from which you will access the bucket. These are the regions and corresponding codes at the time of writing:

However, once the bucket is created, you will note that:

Don't get confused: the S3 console displays all buckets regardless of region. You can find the actual region a bucket resides in from the corresponding column.

Alternatively, if you are more familiar with the AWS CLI, you might like to create the bucket from the command line.

aws s3api create-bucket --bucket data-bucket --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

you would need to have your AWS CLI set up properly before you can run the command:

aws configure

AWS Access Key ID [None]:

AWS Secret Access Key [None]:

Default region name [None]: us-west-2

Default output format [None]: json

after this, it will generate valid config and credentials files:

~/.aws/config

~/.aws/credentials
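They typically look something like this (all values are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-west-2
output = json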

set up the proper access

After the bucket is created, you need to set up the proper access. There are two approaches for S3 access: either using an S3 bucket policy or an IAM policy.

Personally, I think the IAM policy should be the de facto place for controlling access to most AWS resources. The reason is that it's decoupled: it exists for access control alone and isn't tied to any specific role or bucket. The S3 bucket policy, on the other hand, is bucket specific, which works if your bucket is long-lived.

However, you have both choices; make your own judgment here.

For an IAM policy, you need to create the role, then associate the principal/person who needs to access the bucket with the role:

Note: you need to grant these actions on the bucket:

"s3:ListBucket",
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"

so in the policy, it would be something like this:
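A minimal sketch of such a policy document (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}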

for the object access, make sure you grant access to both resources:

arn:aws:s3:::bucket
arn:aws:s3:::bucket/*

you can use the AWS policy simulator to confirm your access setup is correct:

https://policysim.aws.amazon.com/home/index.jsp?#

alternatively, you can grant specific S3 bucket access:
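A minimal sketch of such a bucket policy, granting the same actions to a specific IAM user (the account ID, user name and bucket name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/data-user" },
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/data-user" },
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}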

install s3fs either from source or package

after you have the S3 bucket created with the proper access, you can now proceed with the installation of s3fs.

You can either install the package directly, for example:

#ubuntu
sudo apt install s3fs

Alternatively, in case you would like to build from source with any customization:

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install

mount by role

Now with the bucket created and s3fs installed, you can do the actual mounting:

mkdir /mnt-drive && s3fs -o iam_role="role-from-step-2" -o allow_other S3-bucket /mnt-drive

in most cases, especially if you would like to access the mount from other cloud services, you need to mount it by role. Normally those roles are tied to the cloud resources.

for example, if you would like to access it from an EC2 instance, granting the role to that EC2 instance would work.

mount by key

In most cases, you might not have the access keys, as these are normally owned by the system admin. But just in case you do, you can mount using your AWS access keys:

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs  # s3fs expects the credentials file to not be readable by others

mkdir /mnt-drive && s3fs -o passwd_file=${HOME}/.passwd-s3fs -o allow_other S3-bucket /mnt-drive

Up to this point, you should be able to access your S3 bucket from the mount point:

FTP

in addition, in case you would like to expose the mount point as an FTP server:

install vsftpd

systemctl start vsftpd

then you can access it from your FTP client
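A minimal sketch of the vsftpd configuration, assuming the bucket is mounted at /mnt-drive and only authenticated local users should browse it (verify the option names against your distribution's /etc/vsftpd.conf):

# /etc/vsftpd.conf (excerpt)
# no anonymous logins
anonymous_enable=NO
# allow local system users to log in
local_enable=YES
# allow uploads back into the mounted bucket
write_enable=YES
# serve the S3 mount as the FTP root
local_root=/mnt-drive
# keep users inside the mount
chroot_local_user=YES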

python logs within kubernetes

by default, logs won't show up from the Kubernetes pod straight away. This is because Python buffers stdout and stderr.

the way to sort this out is to set

PYTHONUNBUFFERED

If this is set to a non-empty string it is equivalent to specifying the -u option

in a helm chart, something like this:

kind: Service
apiVersion: v1
metadata:
  labels:
    app: example
  name: example
spec:
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    app: example
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: example
  name: example
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: example
    spec:
      containers:
      - image: example
        imagePullPolicy: IfNotPresent
        name: example
        ports:
        - containerPort: 443
        resources: {}
        env:
          - name: PYTHONUNBUFFERED
            value: "0"
      restartPolicy: Always
status: {}
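Alternatively, if you control the image, the variable can be baked into the Dockerfile instead of the chart (a sketch; the base image and entrypoint are placeholders):

FROM python:3.8-slim
# any non-empty value disables Python's stdout/stderr buffering
ENV PYTHONUNBUFFERED=1
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]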

docker privilege

 

the `--privileged` flag is powerful yet dangerous:

https://blog.trendmicro.com/trendlabs-security-intelligence/why-running-a-privileged-container-in-docker-is-a-bad-idea/

 

while the alternative is just to mount the Docker socket:

docker run -v /var/run/docker.sock:/var/run/docker.sock 

it’s not strictly docker-in-docker, but it should be able to serve most use cases.

options to configure AWS provider in terraform

provider "aws" {
  region     = "us-east-1"
  access_key = "your-access-key-here"
  secret_key = "your-secret-key-here"
}

or point to the profile

provider "aws" {
  region                  = "us-east-1"
  shared_credentials_file = "~/.aws/credentials" // default: "~/.aws/credentials"
  profile                 = "tf-admin"           // default: "default"
}

Set up container runtime variable

Image commands only run at build time; however, while running the container, we might need to access some environment or configuration variable supplied at build time. Here is the workaround:

## Build time
ARG version=unknown
## Run time, carried into the running container
ENV version=${version}

Then to pass in the arg,

docker build --build-arg version=0.0.1 .

for docker-compose:

container:
  image: image
  restart: always
  build:
    context: dockerfile
    args:
      version: ${version} ## alternatively, default a value here

In docker-compose, `${version}` retrieves the value from the environment where docker-compose runs.
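For example (a sketch; the value is a placeholder), it can be supplied from the shell or from an `.env` file next to docker-compose.yml:

# from the shell
version=0.0.1 docker-compose build

# or put version=0.0.1 into .env and simply run
docker-compose build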

terraform plan

I have been running terraform plan on some limited changes to the .tf files.

however, the plan always reports the whole project as needing to be added, even with a forced refresh:

terraform refresh

 

turns out the solution is to sync the workspace. Make sure the workspace you are switching to is the intended one:

terraform workspace list

terraform workspace select master

Tab & URL manager

Upgrade your Chrome experience. Leverage this app to handle tab and URL navigation.
You can maximize your screen space and use Chrome in complete full-screen mode by dropping the address bar and the tab bar.

* type a URL followed by enter to visit the specified address
* type a term followed by enter to search the term (the search engine is configurable)
* type a term to navigate to any popular search suggestions
* type a keyword to navigate to any mapped URL (the map is configurable)
* type a term to navigate to any open tabs
* type a term to get the likely result you would like to visit


**Configurations**
* change the search engine from the options page
* configure the keyword and URL mapping

**Keyboard Shortcut:**
 
Windows: Alt+L

Mac: ⌥+L

 

https://chrome.google.com/webstore/detail/tab-url-manager/egiemoacchfofdhhlfhkdcacgaopncmi

mount AWS S3 as a shared drive

AWS S3 is a popular choice for cloud storage, due to its cost and stability.

It would be super convenient if we could mount the S3 bucket, so that we can access it through FTP or from the local OS directly.

 

Steps:

  1. create the S3 bucket
  2. set up the proper access
    1. either through s3 policy
    2. or through IAM policy
      1. if through IAM policy, need to create the role, associate the role with the policy
  3. install s3fs either from source or package
    #ubuntu
    sudo apt install s3fs
    
    
    #from source
    
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git
    cd s3fs-fuse
    ./autogen.sh
    ./configure
    make
    sudo make install

     

  4. mount by role

     

    mkdir /mnt-drive && s3fs -o iam_role="role-from-step-2" -o allow_other S3-bucket /mnt-drive

     

  5. mount by key
    1. mkdir /mnt-drive && s3fs -o passwd_file=${HOME}/.passwd-s3fs
      -o allow_other S3-bucket /mnt-drive
  6. FTP
    1. install vsftpd

      systemctl start vsftpd