Support

Troubleshooting

Before reaching out in #delivery on Slack, double-check that your question wasn’t already covered here.

I cannot access my collection

  • Check that you can ping the server on the VPN. Make sure you were added to the appropriate VPN group (see Getting Started); join #engops on Slack to troubleshoot.

  • Check that you can log in to the Admin UI

  • In the main-workspace bucket, check that you can create records in your collection (eg. main-workspace/tippytop)

I approved the changes, but still don’t see them

Frequently Asked Questions

How often does synchronization happen?

Synchronization can happen within 10 minutes of the change, or take up to 24 hours.

There are two triggers for synchronization: a push notification and a polling check. Every five minutes, a server-side process checks for changes. If any changes are found, a push notification is sent and online clients check in for updates. Clients that are offline or did not receive the push notification will either catch up on next startup or automatically poll for changes every 24 hours.

What is the lag on the CDN?

The client uses the /v1/buckets/main/collections/{cid}/changeset endpoint, which requires a ?_expected={} query parameter. Since the push notification contains the latest change timestamp, the first clients that pull the changes from the CDN bust its cache.

When using the /records endpoint manually, without any query parameters, the CDN lag can be much higher (typically one hour).
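
For reference, the changeset endpoint can also be queried manually with a plain HTTP request. The _expected value acts as a cache-busting token: a new value forces the CDN to serve a fresh response. The following is a minimal sketch against STAGE, using a dummy value since no push timestamp is available, and a placeholder collection id:

import requests

SERVER = "https://firefox.settings.services.allizom.org/v1"  # STAGE
CID = "top-sites"  # any published collection id

# Clients pass the timestamp received in the push notification as _expected;
# for a manual check, a dummy value such as 0 is accepted.
resp = requests.get(
    f"{SERVER}/buckets/main/collections/{CID}/changeset",
    params={"_expected": 0},
)
changeset = resp.json()
print(changeset["timestamp"], len(changeset["changes"]))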

How do I set up Firefox to pull data from STAGE?

The recommended way to set up Firefox to pull data from STAGE is to use the Remote Settings DevTools extension: switch the environment in the configuration section and click the Sync button.

Note

On Beta and Release, you have to run Firefox with the environment variable MOZ_REMOTE_SETTINGS_DEVTOOLS=1 to toggle environments.

Alternatively, in order to point to STAGE on fresh profiles for example, you can set the appropriate preferences in a user.js file:

user_pref("services.settings.server", "https://firefox.settings.services.allizom.org/v1");
user_pref("dom.push.serverURL", "https://autopush.stage.mozaws.net");

See developer docs to trigger a synchronization manually.

How do I preview the changes before approving?

The recommended way to set up Firefox to pull data from the preview collection is to use the Remote Settings DevTools extension: switch the environment to Preview and click the Sync button.

Note

On Beta and Release, you have to run Firefox with the environment variable MOZ_REMOTE_SETTINGS_DEVTOOLS=1 to toggle environments.

See developer docs about preview mode for manual toggling.

How do I preview the changes before requesting review?

Currently, this is not possible.

Possible workarounds:

How do I trigger a synchronization manually?

See developer docs.

How do I define default data for new profiles?

See developer docs about initial data.

How do I automate the publication of records? (one shot)

The Remote Settings server is a REST API (namely a Kinto instance). Records can be created in batches and, as seen in the multi-signoff tutorial, reviews can be requested and approved using PATCH requests.

If it is a one-time run, then you can run the script as if it were you:

  1. Authenticate on the Admin UI

  2. In the top right corner, use the 📋 icon to copy the authentication string (eg. Bearer r43yt0956u0yj1)

  3. Use this header in your cURL commands (or Python/JS/Rust clients etc.)

curl 'https://remote-settings.allizom.org/v1/' \
  -H 'Authorization: Bearer r43yt0956u0yj1'
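
The same token can be used from Python, for instance with the requests library, to create records in a batch and request review. This is a minimal sketch assuming the standard Kinto /batch endpoint; the bucket and collection are the placeholders used above, and the record payloads are arbitrary:

import requests

SERVER = "https://remote-settings.allizom.org/v1"
HEADERS = {"Authorization": "Bearer r43yt0956u0yj1"}
COLLECTION = "/buckets/main-workspace/collections/tippytop"

# Create several records with a single (Kinto) batch request.
batch = {
    "defaults": {"method": "POST", "path": f"{COLLECTION}/records"},
    "requests": [
        {"body": {"data": {"title": "example 1"}}},
        {"body": {"data": {"title": "example 2"}}},
    ],
}
requests.post(f"{SERVER}/batch", json=batch, headers=HEADERS).raise_for_status()

# Request review on the collection (see the multi-signoff tutorial).
requests.patch(
    f"{SERVER}{COLLECTION}",
    json={"data": {"status": "to-review"}},
    headers=HEADERS,
).raise_for_status()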

How do I automate the publication of records? (forever)

If the automation is meant to last (eg. cronjob, lambda, server-to-server), then the procedure would look like this:

  1. Get in touch with us on #delivery ;)

  2. Fork this repo as a base example

  3. Request a dedicated Kinto internal account to be created for you (eg. password-rules-publisher). The secret password should remain in a vault and be managed by Ops.

  4. Request the Ops team to run your ingestion job (Bugzilla template)

With regards to the script:

  • MUST read the following environment variables:

    • AUTHORIZATION: Credentials for building the Authorization Header (eg. Bearer f8435u30596, some-user:some-password)

    • SERVER: Writer server URL (eg. https://remote-settings.allizom.org/v1)

    • ENVIRONMENT (optional): dev, stage, prod

    • DRY_RUN (optional): do not perform operations if set to 1

  • MUST exit with 0 on success and 1 if there were any errors.

  • MUST be idempotent (ie. no-op if no change)

  • MUST output logs to stdout

  • CAN request review on the collection (with PATCH {"data": {"status": "to-review"}})

  • CAN self approve changes if ENVIRONMENT==dev (with PATCH {"data": {"status": "to-sign"}})

See multi-signoff tutorial for more information about requesting and approving review.
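
As an illustration, here is a minimal sketch of such a script in Python with the requests library. The bucket, collection and source data are hypothetical placeholders; a real job would more likely rely on kinto-http.py, as recommended below:

import os
import sys

import requests

BUCKET = "main-workspace"
COLLECTION = "password-rules"  # hypothetical collection


def main():
    authorization = os.environ["AUTHORIZATION"]
    server = os.environ["SERVER"]
    environment = os.getenv("ENVIRONMENT", "prod")
    dry_run = os.getenv("DRY_RUN") == "1"

    session = requests.Session()
    if authorization.lower().startswith("bearer "):
        session.headers["Authorization"] = authorization
    else:
        # "some-user:some-password" style credentials.
        session.auth = tuple(authorization.split(":", 1))

    collection_url = f"{server}/buckets/{BUCKET}/collections/{COLLECTION}"

    # Idempotency: only publish what is not already on the server.
    existing = session.get(f"{collection_url}/records").json()["data"]
    wanted = [{"id": "example-record", "domain": "example.com"}]  # hypothetical source data
    existing_ids = {r["id"] for r in existing}
    to_create = [r for r in wanted if r["id"] not in existing_ids]

    if not to_create:
        print("Nothing to do.")
        return 0

    if dry_run:
        print(f"DRY_RUN: would create {len(to_create)} record(s).")
        return 0

    for record in to_create:
        print(f"Creating record {record['id']}")
        resp = session.put(f"{collection_url}/records/{record['id']}", json={"data": record})
        resp.raise_for_status()

    # Request review, or self-approve on dev.
    status = "to-sign" if environment == "dev" else "to-review"
    resp = session.patch(collection_url, json={"data": {"status": status}})
    resp.raise_for_status()
    print(f"Collection status set to {status}.")
    return 0


if __name__ == "__main__":
    try:
        sys.exit(main())
    except Exception as exc:
        print(f"Error: {exc}")
        sys.exit(1)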

With regards to the repository:

  • MUST build a Docker container

  • MUST contain a GitHub Action that will publish to Dockerhub once credentials are set up by Ops

We recommend the use of kinto-http.py (script example), but Node JS is also possible (see mdn-browser-compat-data or HIBP examples).

Note

Even if publication of records is done by a script, a human will have to approve the changes manually. Generally speaking, disabling dual sign-off is possible, but only in very specific cases.

If you want to skip manual approval, request a review of your design by the cloud operations security team.

Once data is ready in DEV or STAGE, how do we go live in PROD?

Stage and prod are aligned in terms of setup, features and versions.

Hence, once done in DEV or STAGE, there is nothing specific or additional to do: you should be able to do the same in PROD!

If you have a lot of data that you want to duplicate from one instance to another, you can use kinto-wizard to dump and load records!

pip install --user kinto-wizard

Dump the main records from STAGE:

kinto-wizard dump --records --server https://firefox.settings.services.allizom.org/v1 --bucket=main --collection=top-sites > top-sites.yaml

Open the .yaml file and change the bucket name at the top to main-workspace.

Log in to the Remote Settings Admin and copy the authentication header (icon in the top bar) in order to use it in the --auth parameter of the kinto-wizard load command. Then load into PROD:

kinto-wizard load --server https://remote-settings.mozilla.org/v1 --auth="Bearer uLdb-Yafefe....2Hyl5_w" top-sites.yaml

Requesting review can be done via the UI, or the command-line.
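
From the command line, requesting review boils down to a single PATCH call. A short sketch, reusing the placeholder token and collection from the kinto-wizard steps above:

import requests

requests.patch(
    "https://remote-settings.mozilla.org/v1/buckets/main-workspace/collections/top-sites",
    json={"data": {"status": "to-review"}},
    headers={"Authorization": "Bearer uLdb-Yafefe....2Hyl5_w"},
).raise_for_status()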

How many records does it support?

We already have use cases that contain several hundred records, and that is totally fine.

Nevertheless, if you have thousands of records that change very often, we should talk! Mostly in order to investigate the impact in terms of payload size, bandwidth, signature verification, etc.

Are there any size restrictions for a single record, or all records in a collection?

Quotas are not enabled on the server. Therefore, technically, you can create records of any size and have as many as you want in the collection.

However, beyond some reasonable size for the whole collection serialized as JSON, we recommend using our attachments feature.

Using attachments on records, you can publish data of any size (as JSON, gzipped, etc.). It gets published on S3 and the records only contain metadata about the remote file (including hash, useful for signature verification).
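
As an illustration, the attachment metadata on a record can be used to download and verify the file. This is a rough sketch assuming the usual attachment fields (location, hash) and the attachments base_url exposed in the server capabilities; the collection is only a placeholder and may not actually use attachments:

import hashlib

import requests

SERVER = "https://firefox.settings.services.allizom.org/v1"

# The server root exposes the base URL from which attachments are served.
base_url = requests.get(SERVER + "/").json()["capabilities"]["attachments"]["base_url"]

# Pick a record that has an attachment (placeholder collection).
records = requests.get(f"{SERVER}/buckets/main/collections/top-sites/records").json()["data"]
record = next(r for r in records if "attachment" in r)

# Download the file and verify it against the hash stored in the metadata.
meta = record["attachment"]
content = requests.get(base_url + meta["location"]).content
assert hashlib.sha256(content).hexdigest() == meta["hash"]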

Does Remote Settings do any sort of compression for the records?

Is it possible to deliver remote settings to some users only?

By default, settings are delivered to every user.

You can add JEXL filters on records to define targets. Every record will be downloaded but the list obtained with .get() will only contain entries that match.

In order to limit the users that will download the records, you can check out our dedicated tutorial.
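
As an illustration, a target is just a JEXL expression stored on the record itself, typically in a filter_expression field. A sketch, reusing the placeholder token and collection from the automation examples above:

import requests

SERVER = "https://remote-settings.allizom.org/v1"
HEADERS = {"Authorization": "Bearer r43yt0956u0yj1"}

# Only clients whose locale matches will see this entry in the .get() results.
record = {"data": {"title": "example", "filter_expression": "env.locale == 'fr-FR'"}}

requests.post(
    f"{SERVER}/buckets/main-workspace/collections/tippytop/records",
    json=record,
    headers=HEADERS,
).raise_for_status()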

How does the client choose the collections to synchronize?

First, the client fetches the list of published collections.

Then, it synchronizes the collections that match one of the following:

  • it has an instantiated client — ie. a call to RemoteSettings("cid") was done earlier

  • some local data exists in the internal IndexedDB

  • a JSON dump was shipped in mozilla-central for this collection in services/settings/dumps/

How to debug JEXL expressions on records?

From a browser console, you can debug JEXL expressions using the raw libraries:

const { FilterExpressions } = ChromeUtils.import(
  "resource://gre/modules/components-utils/FilterExpressions.jsm"
);

await FilterExpressions.eval("a.b == 1", {a: {b: 1}});

In order to test using a real application context instead of an arbitrary object:

const { ClientEnvironmentBase } = ChromeUtils.import(
  "resource://gre/modules/components-utils/ClientEnvironment.jsm"
);

await FilterExpressions.eval("env.locale == 'fr-FR'", {env: ClientEnvironmentBase});