Kibana: Use new elasticsearch-js client in Elasticsearch service

Created on 23 Apr 2019  ·  27 Comments  ·  Source: elastic/kibana

The elasticsearch-js client library we've been using for years is now deprecated, and the new node library has been published: https://github.com/elastic/elasticsearch-js

We should move our server-side Elasticsearch service over to use this new client. We might still need to keep the legacy client around for certain pieces of functionality that we can't easily switch over yet, but there should be no problem using both at the same time.

Since most code is consuming ES through callWithRequest or callWithInternalUser, swapping out the underlying client should be doable without significant changes in plugins.

Once this initial issue is wrapped up, we can then expose the new client directly and deprecate the callWithRequest and callWithInternalUser abstractions.

Subtasks

New Platform Core chore

All 27 comments

Pinging @elastic/kibana-platform

cc @delvedor

Hello! Thank you for opening this :)

You can find the entire breaking-changes list here; it contains some code examples as well.
I tried integrating the new client myself at the beginning of the year, but I wasn't able to keep up with the pace of the changes landing in Kibana. You can find all the work I did in https://github.com/elastic/kibana/pull/27051.

Please feel free to ask any questions if you have doubts!

Ideas that have been discussed in various meetings, posting here so we don't lose them:

  • We could introduce the new interface alongside the legacy one. This would allow migrations to the NP to proceed without having to change anything immediately. Plugins could then migrate to the new client when they're ready, and we could deprecate the old one.
  • We could power the legacy interface with the new client under the hood. @delvedor is currently working on a compatibility layer that we could leverage for this.

I think we should do both of those ideas. We should replace the implementation of callWithRequest/callWithInternalUser with the new client, which shouldn't break compatibility with existing code at all. Then we should expose the new client interface the way we want to expose it without functions in front of it. This way new code can use the client directly, existing code can migrate away from callWith* independently of their new platform migration, and really old code that is barely touched can just keep using the old stuff.

We should replace the implementation of callWithRequest/callWithInternalUser with the new client, which shouldn't break compatibility with existing code at all.

It depends™. You can easily adapt the surface API, but the errors will be different; see the breaking changes document.

I'm working on a module that makes the surface API of the new client compatible with the legacy one. I'm wondering whether remapping the errors as well would be useful.

This way new code can use the client directly, existing code can migrate away from callWith* independently of their new platform migration,

I think having our own abstraction on top of a client is still useful: we can follow our own update cycle, introduce custom logic in front of the client library, prevent monkey-patching of the internals, etc.

We should replace the implementation of callWithRequest/callWithInternalUser with the new client, which shouldn't break compatibility with existing code at all.

On the one hand, we can test the client library in real-world conditions. On the other hand, we have to implement a compatibility layer, which also takes time to implement and cover with tests. @delvedor how hard would it be to finish work on https://github.com/elastic/elasticsearch-js-compatibility-mode/blob/master/index.js?
Some small bits of inconsistency can pop up later; for example, https://github.com/elastic/kibana/issues/39430#issuecomment-506632604

It probably makes sense for the New Platform elasticsearch service to expose two types of clients:

  • the current version (almost legacy-compatible, except for the callAsInternalUser/callAsCurrentUser naming, though we could even fall back to the legacy callWithInternalUser/callWithRequest). It still uses the legacy elasticsearch library under the hood. This is the simplest option for plugins to migrate to under time pressure. We need to discuss how long we maintain this compatibility layer; having plugins migrated to the New Platform simplifies further refactoring anyway.
  • a version with elasticsearch-js under the hood. Plugin authors may choose to migrate to it if they have the capacity, although it will require more commitment from them to adapt their code to the breaking changes.

I think having our own abstraction on top of a client is still useful: we can follow our own update cycle, introduce custom logic in front of the client library, prevent monkey-patching of the internals, etc.

The new client has an API that allows you to extend it and add new, custom APIs to it (see the docs).

client.extend('customSearch', ({ makeRequest }) => {
  return function customSearch (params, options) {
    // your custom logic
    makeRequest({ ... }) // send the request to Elasticsearch
  }
})

client.customSearch({ ... })

Regarding wrapping the client with your already existing code, I tried to tackle that in https://github.com/elastic/kibana/pull/27051, but I wasn't happy with the solution, since I ended up with a mix of old and new code, which was quite confusing.

My suggestion would be to keep two separate clients: the legacy one with its own wrapper, and the new one without any wrapper (or with a very thin wrapper around it).
This approach has some benefits:

  • it's easy to tell whether a piece of code has been migrated to the new client or not
  • it avoids sneaky bugs, since every wrapper, even a very good one, may introduce a bug that is hard to uncover.
  • the new client is designed to be extensible out of the box, and thanks to the child client feature you can easily create multiple clients that share a common connection pool.

On the other hand, having two separate clients this way may make the migration take longer to finish, but I still think it's better, since the final result will be cleaner and less hacky.

@delvedor how hard would it be to finish work on elastic/elasticsearch-js-compatibility-mode:index.js@master?

It should be ready, but I cannot be sure it works in all cases until we try it in a codebase that uses the old client extensively.
But as said above, I would recommend moving to the new client in one go instead of doing the work twice.

We have decided to punt on this a bit.

  • We will not be removing the legacy client in 8.0. With all plugin teams focused on migrating to the Kibana Platform plugin system, requiring this change would be a further wrinkle in that migration without much direct benefit.
  • We will introduce the new client alongside the legacy client once Platform Migration work is complete. All new development should be against this new client.
  • We will remove the legacy client in a future 8.x version, as a breaking change to plugin developers.

@joshdover I'm curious about your second point, "once Platform Migration work is complete". When is this the case? We would like to use the new client in the package manager today, for example to load ILM policies. The new client has nice APIs for this (https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html#_ilm_putlifecycle), but we couldn't find an equivalent in the legacy one. We could work around this by writing the request ourselves, but it would be nicer to use the new client directly, as sketched below. Is this possible?
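For illustration, this is the kind of call we'd like to be able to write with the new client (a sketch; the policy name and body are made up and abbreviated):

// ilm.putLifecycle is the new client's typed ILM API
await client.ilm.putLifecycle({
  policy: 'my-policy', // the policy name
  body: {
    policy: {
      phases: { /* hot/warm/delete phases elided */ },
    },
  },
});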

@ruflin At the very earliest, I think this could get prioritized in 3-5 months, when the Platform team is not 100% focused on supporting plugin migrations to the Platform. The Platform migration is the top priority for the entire Kibana project and a blocker for 8.0, so we just can't justify spending time on much else right now. We also don't want to rush adding it now and not do it the right way.

In the meantime, I suggest using the transport.request method, which allows for custom requests while still using all the authentication machinery. There are many examples of this in the repository, where we use some of the under-the-hood xpack endpoints.
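A hedged sketch of that pattern with the legacy caller (the endpoint and names here are illustrative, not a specific existing usage):

// 'transport.request' proxies an arbitrary path through the legacy client
// while reusing all of its authentication machinery.
const response = await callWithRequest(request, 'transport.request', {
  method: 'PUT',
  path: '/_ilm/policy/my-policy',
  body: { policy: { phases: {} } },
});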

Hopefully the new elasticsearch-js client will give us more useful stack traces, so that adopting it will fix https://github.com/elastic/kibana/issues/60237

We would like to already use the new client today

Same goes for APM! As mentioned above, it would be great if the new client were made available alongside the old client, so plugins that have the bandwidth now can migrate and handle the breaking changes. I'm thinking something like this could already be done for 7.9:

const {
  callAsCurrentUser,
  callAsInternalUser,
  callAsCurrentUserNext,
  callAsInternalUserNext
} = context.core.elasticsearch.dataClient;

Next steps could be to suffix callAsCurrentUser / callAsInternalUser with Legacy and make callAsCurrentUserNext / callAsInternalUserNext the default (dropping the Next suffix).

(apologies if I'm repeating what you are already planning to do. Just super excited about the prospect of moving to the new client)

After taking a look at the breaking changes (https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/breaking-changes.html), and playing a little with the new API, I'm going to open the discussion:

Pain points from the changelog

this list may be non-exhaustive

the internal logger is no more

There is no longer an integrated logger. The client now is an event emitter that emits the following events: request, response, and error.

Estimated impact: medium

We can no longer pass a logger class using ElasticsearchClientConfig.log. This also impacts the way ElasticsearchClientConfig.logQueries works.

If we want to preserve the logQueries functionality (and the internal logging), we could just add listeners for the events emitted by the client.
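For instance, a minimal sketch (assuming a KP logger instance and a logQueries flag; the payload fields follow the client's observability docs):

// The new client is an EventEmitter; 'response' fires once per completed request.
client.on('response', (err, result) => {
  if (err) {
    logger.error(`Elasticsearch error: ${err.message}`);
  } else if (logQueries) {
    const { method, path } = result.meta.request.params;
    logger.debug(`${result.statusCode} ${method} ${path}`);
  }
});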

The main issue is more with the ElasticsearchClientConfig.log option: if we want to preserve a way for consumers of esService.createClient to use a custom logger, we will have to expose a new ClientLogger interface.

A very 'raw' version could look like this (directly using the library types):

interface ClientLogger {
  onRequest(error: ApiError, event: RequestEvent): void;
  onResponse(error: ApiError, event: RequestEvent): void;
  onSniff(error: ApiError, event: RequestEvent): void; // useful?
  onResurrect(event: ResurrectEvent): void; // useful?
}

We could also split the handlers between onRequestSuccess + onRequestError and so on.

We could also just decide to remove the ElasticsearchClientConfig.log option, as it would be significantly easier not to expose an API and just handle logQueries's logger internally.

Another important detail is that the new client no longer logs warnings, nor does it emit an event for them; they are now returned in the response body and/or available in the response events.

More info on client observability: https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/observability.html

No backward compatibility

The code is no longer shipped with all the versions of the API, but only that of the package’s major version. This means that if you are using Elasticsearch v6, you are required to install @elastic/elasticsearch@6, and so on.

Estimated impact: minor

I think this shouldn't be an issue, as we are supposed to be on the same major as the stack anyway. It does deprecate the elasticsearch.apiVersion config property though, which would then only be used by the legacy client.

No more specific response errors

Errors: there is no longer a custom error class for every HTTP status code (such as BadRequest or NotFound). There is instead a single ResponseError. Every error class has been renamed, and now each is suffixed with Error at the end.

Removed errors: RequestTypeError, Generic, and all the status code specific errors (such as BadRequest or NotFound).

Estimated impact: low

Not directly impacting the new client implementation much, but consumers will need to adapt by checking error.statusCode instead when looking for specific errors.

We could, if we want, provide a label->code mapping
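For example, a consumer check would move from error classes to status codes (a sketch using the errors exported by the new client; index and id are made up):

import { errors } from '@elastic/elasticsearch';

try {
  await client.get({ index: 'my-index', id: 'missing-doc' });
} catch (err) {
  // A single ResponseError class; discriminate on statusCode instead of
  // catching a NotFound class.
  if (err instanceof errors.ResponseError && err.statusCode === 404) {
    // handle "not found"
  } else {
    throw err;
  }
}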

Params are split between params and transport options

There is a clear distinction between the API related parameters and the client related configurations. The parameters ignore, headers, requestTimeout and maxRetries are no longer part of the API object and you need to specify them in a second option object.

client.search(searchOptions, { headers, maxRetries });

Estimated impact: minor

The transport-related options (such as headers, ignore and maxRetries) now live in a distinct second TransportRequestOptions parameter. The impact should be minimal though, as it only affects the ClusterClient internal implementation.

The plugin API changed

The plugins option has been removed. If you want to extend the client now, you should use the client.extend API.

// before
const { Client } = require('elasticsearch')
const client = new Client({ plugins: [...] })

// after
const { Client } = require('@elastic/elasticsearch')
const client = new Client({ ... })
client.extend(...)

This is a breaking change for createClient consumers that used plugins, as they will need to adapt their usages.

Also, as plugins is no longer a client constructor option, if we want to preserve support for custom plugins, we will need to add an API that gives the createClient consumer access to the client.extend method, either directly or by wrapping it.

More info on client extension here: https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/extend-client.html

We could add a configure option to ElasticsearchClientConfig, which could be used like this:

esStart.createClient('type', {
  configure: (extend) => {
    extend('my.extension', extendsCallback);
    extend('my.other.extension', extendsCallback);
  },
})

Remarks on changes in client options

  • the keepAlive option seems to have disappeared, but keep-alive now appears to be the default behavior. ElasticsearchClientConfig.keepAlive is probably useless now

  • the host and hosts options have been replaced by node and nodes. It's hard to say for sure, as the legacy type was any, but even if the new API is less permissive in how you define your nodes, it doesn't look like there are any breaking changes.

  • the ssl option is now properly typed. All the options we were using seem to be unchanged. (See the construction sketch after this list.)
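A minimal construction sketch with the renamed options (values are illustrative):

import { Client } from '@elastic/elasticsearch';

// `node` replaces the legacy `host`; the `ssl` sub-options appear unchanged.
const client = new Client({
  node: 'https://localhost:9200',
  ssl: {
    ca: caCertificate, // hypothetical PEM string or Buffer
    rejectUnauthorized: true,
  },
});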

Questions for the new client/service API

Do we want/need a new version of the APICaller

To what extent do we want to reproduce the APICaller interface we are currently using for the legacy client? Is there any valid reason to do so? Should we leverage/mimic the underlying client API instead?

client.callAsCurrentUser('cat.count', params)

vs

// client.asCurrentUser() / client.asScoped(request) would return an interface very close to `@elastic/elasticsearch`'s `Client`
client.asCurrentUser().cat.count(params);
client.asScoped(request).cat.count(params)

The advantage of the APICaller was that it allowed passing additional options (CallAPIOptions). However, there are only two:

  • wrap401Errors

This was used to convert 401 errors to Boom errors. As we will no longer wrap ES errors into Boom errors in the new ES service, I think this is now useless.

  • signal

This was used to allow request abortion. However, with the new API (I think this was already the case with the old one), all returned promises provide an abort() function:

const req = client.search();
req.abort();

Note that even with an API close to the Client's, we would still need a wrapper class anyway, to inject the headers for asScoped for example, so we would not lose any capability compared to the old client. So this is more of an API design question.

How should we expose errors returned from the client

The new ES Client (at least in its promise-based API) throws errors on call failure, just as the legacy client did.

Is this how we want to expose errors to ClusterClient consumers, by letting them try/catch around await, or do we want something different, such as an Either<Response, Error> monad?

Signatures like

search<
  TResponse = Record<string, any>,
  TRequestBody extends RequestBody = Record<string, any>,
  TContext = unknown
>(
  params?: RequestParams.Search<TRequestBody>,
  options?: TransportRequestOptions
): TransportRequestPromise<ApiResponse<TResponse, TContext>>

could be exposed in our wrapper as

search<
  TResponse = Record<string, any>,
  TRequestBody extends RequestBody = Record<string, any>,
  TContext = unknown
>(
  params?: RequestParams.Search<TRequestBody>,
  options?: TransportRequestOptions
  // AbortablePromise would be our version of TransportRequestPromise
): AbortablePromise<ApiResponseOrError<ApiResponse<TResponse, TContext>>>

with ApiResponseOrError being something similar to:

type ApiResponseOrError<ResponseType> = {
  success: boolean; // or isError + isResponse
  response?: ResponseType;
  error?: ApiError; // the error type thrown by the client
}
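Consumer code against such a wrapper might then look like this (a sketch; wrappedClient, handleHits and log are hypothetical):

const result = await wrappedClient.search({ index: 'my-index' });
if (result.success) {
  // response is guaranteed to be present on the success branch
  handleHits(result.response!.body.hits);
} else {
  // no try/catch needed; the error travels in the return value
  log.error(result.error!.message);
}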

Do we need to wrap / convert the response from the client

All the new client APIs are returning the same ApiResponse type, which is:

export interface ApiResponse<TResponse = Record<string, any>, TContext = unknown> {
  body: TResponse;
  statusCode: number | null;
  headers: Record<string, any> | null;
  warnings: string[] | null;
  meta: {
    context: TContext;
    // [...] meta info such as the requestId, the connection used, the number of attempts
  };
}

Should our ClusterClient API return the value from the underlying client as-is, or do we want to return just the body attribute, as is done in the legacy client?

Bonus question: should we remove ClusterClient altogether?

We could just get rid of our ClusterClient wrapper and expose APIs to access a preconfigured ES Client:

interface ElasticSearchServiceStart {
  getInternalClient: () => Client,
  getScopedClient: (request) => Client,
}

Consumers would just access and manipulate a 'raw' elasticsearch client.

The biggest pro is probably that our implementation would be way thinner. The con is that, apart from providing custom options during client creation, we won't have much control over the client once it's created.

We could use the child API to make sure all clients share the same connection pool, as sketched below.
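A sketch of the child API (per the client docs; the scoping header here is illustrative):

import { Client } from '@elastic/elasticsearch';

const rootClient = new Client({ node: 'http://localhost:9200' });

// child() returns a client that shares the parent's connection pool,
// with options such as headers overridden per child.
const scopedClient = rootClient.child({
  headers: { authorization: incomingRequest.headers.authorization },
});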

Legacy client renaming

The legacy client currently lives in src/core/server/elasticsearch. How should we handle the code separation between the two?

Suggestion:

  • move all the legacy client related files to src/core/server/elasticsearch/legacy
  • prefix all the legacy client related types with Legacy (LegacyClusterClient, LegacyElasticsearchClientConfig)
  • (?) flag all the legacy types as deprecated

This could/should probably be done in an isolated PR.

The plugin API changed

We could add a configure option to ElasticsearchClientConfig, which could be used like this:

We should validate whether or not we will even need the plugin/extend API. The new client is supposed to support all HTTP REST APIs in Elasticsearch, so we may be able to get away with not needing this (at least in Kibana plugins). One area where we may still need this is interfacing with the Kibana system index, which will have its own HTTP routes. However, I believe those should all be Core-only usages.

Do we want/need a new version of the APICaller

To what extent do we want to reproduce the APICaller interface we are currently using for the legacy client? Is there any valid reason to do so? Should we leverage/mimic the underlying client API instead?

I lean towards using the underlying client API as much as possible. If we do have a need for additional options, I think we can still support that in our wrapper and/or some extension of the base client interface.

There are some drawbacks to how TypeScript handles method overloading, which we rely on to support different return types on the APICaller interface. One such drawback is that using conditional or mapped types (e.g. ReturnType<APICaller>) with an overloaded function signature always pulls the most generic signature rather than something more specific and useful (e.g. ReturnType<Client['cat']['indices']>).
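A minimal illustration of that TypeScript behavior (made-up signatures):

declare function call(endpoint: 'cat.indices'): Promise<string[]>;
declare function call(endpoint: string): Promise<unknown>;

// Conditional/mapped types resolve overloads against the last, most
// generic signature:
type R = ReturnType<typeof call>; // Promise<unknown>, not Promise<string[]>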

How should we expose errors returned from the client

The new ES Client (at least in its promise-based API) throws errors on call failure, just as the legacy client did.

Is this how we want to expose errors to ClusterClient consumers, by letting them try/catch around await, or do we want something different, such as an Either<Response, Error> monad?

This decision should probably be driven by a higher-level convention on API design within Kibana services. Would be a good topic to bring up in our monthly syncs with @elastic/kibana-app-arch and/or as a separate discuss issue. There are certainly tradeoffs with both directions that should be weighed and discussed.

I worry about introducing a new pattern that will take time to propagate to all other services without the bandwidth to make it happen quickly.

Do we need to wrap / convert the response from the client

Should our ClusterClient API return the value from the underlying client as-is, or do we want to return just the body attribute, as is done in the legacy client?

I lean towards exposing more data to plugins. Specifically, I can think of a handful of cases where having access to the response headers would have been useful.

Bonus question: should we remove ClusterClient altogether?

We could just get rid of our ClusterClient wrapper and expose APIs to access a preconfigured ES Client.

I lean towards no. I believe there are benefits we get from having a thin wrapper. Specifically, I'm thinking about audit logging and out-of-the-box request tracing (incoming HTTP request -> Elasticsearch request via the X-Opaque-Id header).
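For instance, assuming the new client's per-request opaqueId option (kibanaRequestId is hypothetical), a thin wrapper could propagate a trace id like this:

// The client sends opaqueId as the X-Opaque-Id header, which Elasticsearch
// echoes in its logs and task APIs, enabling end-to-end request tracing.
const response = await client.search(
  { index: 'my-index' },
  { opaqueId: kibanaRequestId }
);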

Legacy client renaming

These suggestions make sense to me 👍

We should validate whether or not we will even need the plugin/extend API. The new client is supposed to support all HTTP REST APIs in Elasticsearch, so we may be able to get away with not needing this (at least in Kibana plugins).

ES sometimes adds new APIs that the ES UI team needs to develop UIs for during a dev cycle (data streams being a current example). In this situation, it doesn't seem realistic to expect the ES JS client to stay up to date with evolving APIs. And even if it did, incorporating it into our dev process seems cumbersome, since I assume we'd want to temporarily update package.json to consume an elasticsearch-js feature branch instead of a release. We typically extend the legacy client with these in-development APIs.

So, if all of the above makes sense, then I think this is justification for offering an extend API.

So, if all of the above makes sense, then I think this is justification for offering an extend API.

Sounds like a good reason to me, thanks for weighing in! 👍

the internal logger is no more
The main issue is more with the ElasticsearchClientConfig.log option: if we want to preserve a way for consumers of esService.createClient to use a custom logger, we will have to expose a new ClientLogger interface.

This log option has been the only way to customize the logging format and destination. The KP logging system already supports context-specific configuration. The only use case not covered might be logging plugin-specific ES requests. I think such an enhancement could be done at the platform level (by attaching the current caller context, for example). Btw, I couldn't find any usages of a custom log in the repo. I'd rather remove the log option and deprecate logQueries.

The plugin API changed
The plugins option has been removed. If you want to extend the client now, you should use the client.extend API.

So, if all of the above makes sense, then I think this is justification for offering an extend API.

Correct me if I'm wrong, but plugins leverage the custom plugins functionality to extend clients with custom methods (client actions). If a user needs to call a new experimental endpoint, they could use transport.request without instantiating a new client. @cjcenizal

Do we want/need a new version of the APICaller
client.asCurrentUser().cat.count(params);

My +1 is for the newer version: autocomplete, type safety, easier discoverability of supported methods.

wrap401Errors

I cannot find any usages in the existing codebase. Is it still a relevant API? @elastic/kibana-security

How should we expose errors returned from the client

I agree with Josh. It might even require an RFC to get input from a broad audience.

Do we need to wrap / convert the response from the client

Let's not repeat the mistake that was made in the fetch API. Headers, warnings, etc. are important pieces of the response, and a user must have access to them.

Bonus question: Should we remove ClusterClient altogether.

interface ElasticSearchServiceStart {
  getInternalClient: () => Client,
  getScopedClient: (request) => Client,
}

Client and ScopedClient aren't interchangeable, and it's easy to make a call on the wrong client by mistake. We would have to add an explicit ScopedClient type to prevent such problems. To me, this sounds like moving complexity from core into plugin code, so I'd rather keep the existing API.

Legacy client renaming

+1

Correct me if I'm wrong, but plugins leverage the custom plugins functionality to extend clients with custom methods (client actions). If a user needs to call a new experimental endpoint, they could use transport.request without instantiating a new client.

Based on experience and some discussion I've seen on Slack, my understanding is that transport.request and custom plugins are different ways of essentially doing the same thing. I believe custom plugins use transport.request under the hood. Do you know if transport.request is supported by the new JS client?

I'm not entirely sure about how they compare in terms of tradeoffs. In terms of downsides, @kobelb mentioned this possible downside to using transport.request on Slack:

the original issue was that it leaves it up to the consumer to call encodeURIComponent when building URLs which contain user input

it also just makes it harder to track specific usage of ES APIs

I also found this PR (https://github.com/elastic/kibana/pull/68331/files), which highlights a mistake that's easy to make and difficult to discover when using transport.request: accidentally leaving off the leading slash in the path.

Do you know if transport.request is supported by the new JS client?

It is.

const client = new Client();
// request(params: TransportRequestParams, options?: TransportRequestOptions): Promise<ApiResponse>;
client.transport.request(/* */);

Also note that the new ES client is supposed to have an exhaustive list of ES endpoints/commands, so I'm not even sure we would really need to use this, but I did not audit the current usages of custom plugins in our codebase. Does anyone have a better view of this than me? @kobelb maybe?

In terms of downsides, @kobelb mentioned this possible downside to using transport.request on Slack:

Maybe it's okay? Since it's meant to be used for development purposes only when you need to play around with a new experimental endpoint. When an endpoint officially supported, we bump the library version and use type-safe library API instead.

That SGTM. I feel like we can wait and see if we really need to implement plugin support for the new client, as transport.request is an acceptable workaround (or even alternative).

@delvedor Can the new https://github.com/elastic/elasticsearch-js client work in a browser environment? I see this in the official repo: https://github.com/elastic/bower-elasticsearch-js#submit-issues-pull-requests-etc-to-elasticsearch-js

@elastic/kibana-app-arch Is there a specific reason why we need to use elasticsearch-browser?
https://github.com/elastic/kibana/blob/78ebb6250a47047d88e3f0c94c9545f633f7c454/src/plugins/data/public/search/legacy/es_client/get_es_client.ts#L21
Can we get rid of it on the client side? The data plugin proxies requests to the server side, where the new ES client will be available anyway.

@restrry the legacy client was compatible with the browser environment; the new one is not.
It could work, but it's neither tested nor supported.
The reason is that we want to discourage people from connecting to Elasticsearch from the browser, as it could easily expose them to security issues.


The new client does support the entire Elasticsearch API surface, unless an API is not defined in the JSON spec here and here.
If you need to add a custom API, I would recommend using client.extend instead of transport.request, so other parts of the code can benefit from the addition. Also, inside an extend function you can run validation as well! The only drawback of the extend functionality is that you will then need to do a type augmentation (see the sketch at the end of this comment).
Please feel free to ping me if you feel that the client is missing a functionality or you think it should solve a problem differently :)
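To illustrate the validation and type-augmentation points, a hedged sketch (the endpoint and names are made up):

client.extend('fleet.getThing', ({ makeRequest, ConfigurationError }) => {
  return function getThing(params: { id: string }) {
    // validation can live inside the extension
    if (!params.id) {
      throw new ConfigurationError('Missing required parameter: id');
    }
    // encode user input before building the path
    return makeRequest({
      method: 'GET',
      path: `/_fleet/thing/${encodeURIComponent(params.id)}`,
    });
  };
});

// The drawback: TypeScript must be told about the new method, e.g. via a
// (hypothetical) module augmentation:
declare module '@elastic/elasticsearch' {
  interface Client {
    fleet: { getThing(params: { id: string }): Promise<ApiResponse> };
  }
}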

We want to remove any usages of the legacy browser APIs. That effort is tracked here: https://github.com/elastic/kibana/issues/55139. Once that is completed we can get rid of the dependency altogether.

Maybe it's okay? Since it's meant to be used for development purposes only when you need to play around with a new experimental endpoint. When an endpoint officially supported, we bump the library version and use type-safe library API instead.

This seems reasonable to me. I do think we should be minimizing the use of transport.request, as it's potentially dangerous, and not type-safe. However, having some option for when an Elasticsearch endpoint hasn't been added to the client seems reasonable.

One area where we may still need this is interfacing with the Kibana system index, which will have its own HTTP routes. However, I believe those should all be Core-only usages.

Monitoring uses it to configure the connection to a custom index: https://github.com/elastic/kibana/blob/53b95424fec89bf503390bcafb5a62e04b28801b/x-pack/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js#L60

wrap401Errors

This option was introduced in https://github.com/elastic/kibana/issues/9583 to inform a user that a request to ES has failed due to expired credentials and that they need to log in again.
I looked at the current implementation, and it seems it has been broken for KP HTTP endpoints since the very beginning of the HTTP service in KP. The Kibana Platform, by design, doesn't support throwing errors to control execution flow at runtime. Any exception raised in a route handler results in a 500 generated by the Kibana HTTP service.
However, some HTTP routes manually reproduce the Hapi logic via the compatibility wrapper or response.customError: https://github.com/elastic/kibana/blob/fcdaed954eefc53601c53a6b6cc6c3fc843c7fa2/x-pack/plugins/spaces/server/routes/api/internal/get_active_space.ts#L24
I'm surprised that we haven't had any issues reported so far.
We need to decide whether we want to support this in KP at all. As far as I can see, we aren't going to support use cases with Security settings in Kibana and ES. So the next question is whether we want to support auth via an intermediate proxy between Kibana and ES.
That would require handling 401 errors at the platform level:

try {
  // call route handler
} catch (e) {
  this.log.error(e);
  if (is401Error(e)) {
    const wwwAuthHeader =
      get(e, 'body.error.header[WWW-Authenticate]') ||
      'Basic realm="Authorization Required"';
    return response.unauthorized({
      headers: {
        'WWW-Authenticate': wwwAuthHeader,
      },
    });
  }
  return hapiResponseAdapter.toInternalError();
}