spring aop annotation

when annotating an advice method with a pointcut, spring throws an error:

error Type referred to is not an annotation type: LogCommands

Turns out Spring picks up the annotation name from the pointcut expression, resolves the type via reflection, and then loads it for the AOP machinery.

So the solution is to spell out the fully qualified name of the annotation type (unless it lives in the same package as the aspect).

for example, instead of (the `@Around` advice and the package name below are illustrative):

    @Around("@annotation(LogCommands)")
    public void logCommands(JoinPoint joinPoint) throws Throwable {
        String command = joinPoint.getSignature().getName();
        String input = Arrays.toString(joinPoint.getArgs());

a fully qualified name makes it work:

    @Around("@annotation(com.example.annotation.LogCommands)")
    public void logCommands(JoinPoint joinPoint) throws Throwable {
        String command = joinPoint.getSignature().getName();
        String input = Arrays.toString(joinPoint.getArgs());
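The underlying mechanism can be seen without Spring at all: resolving a type by name through reflection needs a fully qualified name, much like `Class.forName` does. A minimal sketch (this is an illustration of the lookup, not the aspect itself):

```java
public class FqnDemo {
    public static void main(String[] args) throws Exception {
        // fully qualified name: resolves fine, and is an annotation type
        Class<?> ok = Class.forName("java.lang.Deprecated");
        System.out.println(ok.isAnnotation());
        try {
            // bare name: nothing named just "Deprecated" on the classpath
            Class.forName("Deprecated");
        } catch (ClassNotFoundException e) {
            System.out.println("not found: " + e.getMessage());
        }
    }
}
```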

gradlew bootRun with spring boot

the default `gradlew bootRun` command does not always work out. The latest error message I got is

as an alternative, a plain build (`gradlew build`) works, though with a caveat: the built jar ends up under the `libs` folder, while the generated bootScripts expect `lib`. Pointing at the right path makes it work:

PS C:\Users\...> .\build\bootScripts\cli.bat
Error: Unable to access jarfile C:\Users\...\build\bootScripts\..\lib\cli-0.0.1-SNAPSHOT.jar

PS C:\Users\...\build> java -jar .\libs\cli-0.0.1-SNAPSHOT.jar

2020-08-23 10:08:55.583  INFO 53436 --- [           main] : Starting Application on ....
2020-08-23 10:08:56.493  INFO 53436 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2020-08-23 10:08:56.580  INFO 53436 --- [           main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 72ms. Found 3 JPA repository interfaces.
2020-08-23 10:08:57.191  INFO 53436 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2020-08-23 10:08:57.823  INFO 53436 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2020-08-23 10:08:57.864  INFO 53436 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]

clear bad state pods

kubectl get po -A | grep Terminating | awk '{ system("kubectl delete po " $2 " -n " $1) }'

force delete

kubectl get po -A | grep CrashLoopBackOff | awk '{ system("kubectl delete --grace-period=0 --force po " $2 " -n " $1) }'

all non-running state

kubectl get pods --field-selector=status.phase!=Running -A --no-headers | awk '{ system("kubectl delete --grace-period=0 --force po " $2 " -n " $1) }'

hot reload apollo gateway

it’s not unusual that new GraphQL servers need to be added, or existing ones updated.

Apollo Server, at the moment, has a gateway component that can maintain a static service list, route traffic to the right server, and aggregate the results.

However, this community version of the Apollo server and gateway cannot cater for that first case: hot-updating/reloading the gateway with changes without shutting the servers down.

After some trial and error, this is the workaround:

instead of hardcoding the service list (the Apollo GraphQL servers), maintain it separately and dynamically:

#### instead of this
const gateway = new ApolloGateway({
  serviceList: [
    { name: "astronauts", url: "http://localhost:4002" },
    { name: "missions", url: "http://localhost:4003" }
  ]
});

#### maintain this list
const dynamicServiceList = [
  { name: "astronauts", url: "http://localhost:4002" },
  { name: "missions", url: "http://localhost:4003" }
];

then, in the Apollo gateway, instead of spoon-feeding the list, switch to loading the service definitions dynamically:

const gateway = new ApolloGateway({
  serviceList: [],
  experimental_updateServiceDefinitions: async (serviceList) => {
    return {
      isNewSchema: true,
      serviceDefinitions: await Promise.all(dynamicServiceList.map(async (service) => {
        // load the SDL from each running server
        const data = await request(service.url, sdlGQL);
        return {
          // then feed the data here
          name: service.name,
          url: service.url,
          typeDefs: parse(data._service.sdl)
        };
      }))
    };
  }
});

At the same time, create a new endpoint, if needed, to take in updated or brand-new servers, so as to update the dynamic list:

app.get('/hot_reload_schema/', async (req, res) => {
  // get the new server info from req (assumed here to arrive as query params)
  const { name, url } = req.query;
  const status = await validateSDL(name, url);
  if (status) {
    // update the dynamic list
    dynamicServiceList.push({ name, url });
    res.send('The new schema has been successfully registered');
  } else {
    res.send('Please check the log for the error details while registering this schema');
  }
});

lastly, make sure the gateway is polling the service list at some interval:

new ApolloGateway({
  pollingTimer: 100_000,
  experimental_pollInterval: 100_000,
});

alternatively, the definitions could also be refreshed by an explicit load:

new ApolloGateway({

I have contributed the solution back to the community here.

commonJS vs ES Module

for Node.js, the server side uses CommonJS modules by default.

// export data, data.js
exports.staffs = [
  { name: "Bush", id: 1 },
  { name: "Forest", id: 2 }
];

// default export from a module, default.js
module.exports = { name: "Gump" };

// then the corresponding imports
const { staffs } = require('./data');
const anything = require('./default');

while the client side uses ES modules:

// a named export, logger.js
export const log = winston.createLogger({....});

// so the import must be a named import to match
import { log } from './logging/logger';

infinite redirects with lua-resty-openidc

lua-resty-openidc is a certified OIDC and OAuth library built on OpenResty. OpenResty is a reverse proxy built on nginx with Lua and LuaJIT embedded, which greatly extends nginx’s capabilities.

lua-resty-openidc can authenticate and authorize the client against a compliant OP (Keycloak in my case). However, I was facing an infinite-redirect issue:

    location /test {

      access_by_lua_block {

        local opts = {
          discovery = "http://keycloak/...../.well-known/openid-configuration",
          redirect_uri_path = "/test",
          accept_none_alg = true,
          client_id = "xxxx",
          client_secret = "xxxxx",
          use_nonce = true,
          revoke_tokens_on_logout = true,
        }

        local res, err, url, session = require("resty.openidc").authenticate(opts)

        if err or not res then
          ngx.status = 403
          ngx.say(err and err or "no access_token provided")
          ngx.exit(ngx.HTTP_FORBIDDEN)
        end
      }

      default_type text/html;
      content_by_lua 'ngx.say("<p>hello, world here from test</p>")';
    }

for the above block, I was expecting the library to direct the client to Keycloak authentication the first time, then redirect back to the redirect_uri /test, which would see the client as already authenticated and proceed to the `content_by_lua` block.

however, instead, it runs into an infinite redirect loop between Keycloak and the redirect_uri: https://github.com/zmartzone/lua-resty-openidc/issues/32#issuecomment-656035986

the final solution was to put the control block (access_by_lua) under location /, and then it worked.


as a follow-up to the original post: the redirect_uri itself could be causing the issue. Instead of pointing it at a final landing page, point it at an intermediate location that then redirects to the original (protected) location; that sorts out the problem as well.
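A sketch of that layout (locations, credentials and URIs are illustrative, and this keeps the discovery URL elided as above): the guard sits at `location /` so the OIDC callback is also handled by `authenticate()`, and the redirect_uri is a dedicated intermediate path rather than the final landing page.

```nginx
location / {
  access_by_lua_block {
    local opts = {
      discovery = "http://keycloak/...../.well-known/openid-configuration",
      redirect_uri_path = "/redirect_uri",  -- dedicated intermediate path
      client_id = "xxxx",
      client_secret = "xxxxx",
    }
    local res, err = require("resty.openidc").authenticate(opts)
    if err then
      ngx.status = 403
      ngx.say(err)
      ngx.exit(ngx.HTTP_FORBIDDEN)
    end
  }

  content_by_lua 'ngx.say("<p>hello, world here from test</p>")';
}
```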


issue with embedded code in google site

I was facing some issues when embedding code into a Google Site.

turns out Google was possibly running some checks, maybe on the traffic while the code was added, and, combined with activity from some add-on or Chrome itself, Google Sites thought it was suspicious and blocked it.

the solution was to run Chrome in incognito mode; then it works fine.