diff --git a/.gitignore b/.gitignore index 65364c0..e81c415 100644 --- a/.gitignore +++ b/.gitignore @@ -21,3 +21,4 @@ yarn-error.log* # VSCode Settings .vscode +.history/ diff --git a/README.md b/README.md index 76ba42f..197673a 100644 --- a/README.md +++ b/README.md @@ -19,11 +19,12 @@ Most changes are reflected live without having to restart the server. This website is built using [Docusaurus 2](https://docusaurus.io/). +Documentation is written using [common markdown syntax](https://commonmark.org/help/) or [MDX syntax](https://mdxjs.com/docs/what-is-mdx/#mdx-syntax) - the file extension will determine the syntax (.md for common markdown and .mdx for MDX). + ## Deployment The website is automatically deployed using Cloudfare Pages. - ## Contributing Feedback and pull requests appreciated! \ No newline at end of file diff --git a/docs/concepts/ipfs.md b/docs/concepts/ipfs.md deleted file mode 100644 index 5d643d3..0000000 --- a/docs/concepts/ipfs.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: IPFS ---- - -## Overview - -IPFS is a decentralized system to access websites, applications, files, and data using content addressing. IPFS stands for **InterPlanetary File System**. The fundamental idea underlying in this technology is to change the way a network of people and computers can exchange information amongst themselves. - -## Key Features - -- Distributed/decentralized system -- Uses content addressing -- Participation - -A decentralized system lets you access information or a file from multiple locations, which aren't managed by a single organization. The pro's of decentralization are - access to multiple locations to access data, easy to dodge content censorship, and faster file transfer. - -IPFS addresses a file by its content instead of its location. A content identifier is the cryptographic hash of the content at that address. It is unique to the content it came in from and permits you to verify if you got what you had requested for. 
- -For IPFS to work well, active participation of people is necessary. If you are sharing files using IPFS, you need to have copies of the shared files available on multiple computers, which are powered on and running IPFS. In a nutshell, many people provide access to each others files and participate in making them available when requested. Note that if you have downloaded a file using IPFS, by default your computer will share it further with others participants to share further. - -## How Does it Work? - -As discussed earlier, IPFS is a p2p (peer-to-peer) storage network. The IPFS ecosystem works with the following fundamental principles. - -1. Unique identification via content addressing -2. Content linking via directed acrylic graphs (DAGs) -3. Content discovery via distributed hash tables (DHTs) - -## Suggested Reading - -For more in-depth knowledge of the IPFS system refer to the [IPFS Conceptual documentation](https://docs.ipfs.io/concepts/). diff --git a/docs/concepts/libp2p.md b/docs/concepts/libp2p.md deleted file mode 100644 index ace7366..0000000 --- a/docs/concepts/libp2p.md +++ /dev/null @@ -1,24 +0,0 @@ -# libp2p -## Overview - -libp2p is a modular system which helps in the development of peer-to-peer network applications. The system comprises of protocols, specifications, and libraries. - -## What is Peer-to-peer? - -Most commonly used peer-to-peer applications include file sharing networks like bittorrent (used to download movies, files) and the recent uptrend of blockchain networks. Both these network types communicate in a peer-to-peer method. - -In a p2p network, participants (also known as nodes or peers) communicate with each other directly rather than using a **server** like the client/server model of data transfer. 
- -# Problems Solved by libp2p - -Of the many problems, the major ones which libp2p addresses include: -- Transport -- Identity -- Security -- Peer Routing -- Content Routing -- Messaging/PubSub - -## Suggested Reading - -For more in-depth knowledge of the libp2p system refer to the [libp2p Conceptual documentation](https://docs.libp2p.io/concepts/). \ No newline at end of file diff --git a/docs/BSL-License.md b/docs/defradb/BSL-License.md similarity index 99% rename from docs/BSL-License.md rename to docs/defradb/BSL-License.md index 0e4c848..a5018a4 100644 --- a/docs/BSL-License.md +++ b/docs/defradb/BSL-License.md @@ -1,5 +1,5 @@ --- -sidebar_position: 6 +sidebar_position: 7 title: BSL 1.1 License --- diff --git a/docs/concepts/_category_.json b/docs/defradb/concepts/_category_.json similarity index 100% rename from docs/concepts/_category_.json rename to docs/defradb/concepts/_category_.json diff --git a/docs/defradb/concepts/ipfs.md b/docs/defradb/concepts/ipfs.md new file mode 100644 index 0000000..5573386 --- /dev/null +++ b/docs/defradb/concepts/ipfs.md @@ -0,0 +1,102 @@ +--- +title: InterPlanetary File System (IPFS) +--- + +## Overview + +The **InterPlanetary File System (IPFS)** is a **distributed** system designed to enable peer-to-peer access to websites, applications, files, and data using **content addressing** instead of traditional location-based addressing. The fundamental goal of IPFS is to revolutionize how information is shared across networks by making it more **efficient, resilient, and censorship-resistant**. + +## Key Features + +IPFS is built on several core principles that distinguish it from conventional web technologies: + +- **Distributed Infrastructure:** Information is retrieved from multiple nodes instead of relying on a single centralized server. +- **Content Addressing:** Data is identified by its **cryptographic hash**, ensuring content integrity and eliminating reliance on specific locations (URLs). 
+- **Decentralized Participation:** The network thrives on active participation, where multiple users store and share files, making data more accessible and resilient to failures. + +## Why IPFS? Advantages of a Distributed Web + +Unlike traditional web systems that rely on centralized servers, IPFS offers several benefits: + +- **Resilience to Failures:** Since content is retrieved from multiple sources, it remains available even if some nodes go offline. +- **Faster Content Delivery:** By retrieving content from the nearest available node, IPFS can significantly reduce latency and bandwidth costs. +- **Censorship Resistance:** IPFS makes it difficult for a single entity to control or restrict access to content. +- **Efficient Storage:** Duplicate files are automatically deduplicated across the network, optimizing storage usage. + +## How IPFS Works + +IPFS operates using three key mechanisms: + +### 1. Content Addressing + +- Each file is assigned a **unique cryptographic hash** (Content Identifier or CID). +- Any change to the file results in a new hash, ensuring integrity and version control. + +### 2. Directed Acyclic Graphs (DAGs) for Content Linking + +- Data is structured as a **Merkle Directed Acyclic Graph (DAG)**, where each node contains links to its components. +- This allows for efficient data distribution and version tracking. + +### 3. Distributed Hash Tables (DHTs) for Content Discovery + +- When a user requests a file, IPFS looks up its CID in a **Distributed Hash Table** (DHT) to locate peers storing the requested content. +- The system retrieves the file from the nearest or most efficient source. + +## How to Use IPFS + +### 1. Adding a File to IPFS + +- A user adds a file to IPFS using an IPFS node. +- The file is broken into chunks and given a unique CID. +- The CID can be used to retrieve the file later. + +### 2. Retrieving a File from IPFS + +- Users request content by CID. 
+- IPFS locates the closest nodes storing that file and delivers the data in a peer-to-peer fashion.
+
+### 3. Pinning and Persistence
+
+- IPFS does not permanently store all files; users must "pin" files to keep them accessible on their own nodes.
+- Content persistence is ensured by either pinning files manually or using **IPFS pinning services**.
+
+## Considerations and Limitations
+
+While IPFS offers significant advantages, users should be aware of:
+
+- **Storage Responsibility:** Files may disappear unless pinned or actively shared by multiple peers.
+- **No Built-in Encryption:** While data integrity is ensured, encryption must be handled separately if needed.
+- **Bandwidth Usage:** Nodes that participate in the network contribute bandwidth, which may impact performance on limited connections.
+
+## Getting Started with IPFS
+
+To start using IPFS:
+
+1. Install IPFS from the [official IPFS website](https://ipfs.io).
+2. Initialize an IPFS node using:
+
+```sh
+ipfs init
+```
+
+3. Add a file to IPFS:
+
+```sh
+ipfs add myfile.txt
+```
+
+4. Retrieve a file using its CID:
+
+```sh
+ipfs cat <CID>
+```
+
+## Further Reading
+
+For more in-depth knowledge, explore:
+
+- [IPFS Official Documentation](https://docs.ipfs.tech/)
+- [IPFS Whitepaper](https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6vTyWKNXq8Rz5KZX3u37aS1WbDyB)
+- [IPFS GitHub Repository](https://github.com/ipfs)
+
+IPFS represents a fundamental shift towards a more open, resilient, and decentralized internet, paving the way for the future of data distribution and web applications.
diff --git a/docs/defradb/concepts/libp2p.md b/docs/defradb/concepts/libp2p.md
new file mode 100644
index 0000000..108e3d7
--- /dev/null
+++ b/docs/defradb/concepts/libp2p.md
@@ -0,0 +1,65 @@
+# libp2p: A Modular Peer-to-Peer Networking Framework
+
+## Overview
+
+libp2p is a flexible and modular framework designed to simplify the development of peer-to-peer (P2P) network applications.
It provides a suite of **protocols, specifications, and libraries** that allow applications to communicate efficiently without relying on centralized servers. + +Originally developed for IPFS, libp2p has evolved into a **standalone networking stack** used in decentralized applications, blockchain networks, and distributed systems. + +## Understanding Peer-to-Peer Networks + +Peer-to-peer (P2P) networking is a communication model where **nodes (peers) interact directly** with each other instead of depending on a central server. This approach contrasts with the traditional **client-server model**, where a server acts as the central point for data exchange. + +### Examples of P2P Networks: + +- **File-sharing networks** (e.g., BitTorrent) – Allow users to share and download files without a central server. +- **Blockchain networks** (e.g., Ethereum, Bitcoin) – Enable decentralized transaction validation and consensus mechanisms. + +By eliminating central authorities, P2P networks enhance **resilience, scalability, and censorship resistance** in distributed applications. + +## Key Challenges Solved by libp2p + +libp2p provides solutions to fundamental networking challenges that arise in P2P environments: + +### 1. **Transport Abstraction** + +- Supports multiple transport protocols (TCP, WebSockets, QUIC, etc.). +- Enables seamless communication across different network environments. + +### 2. **Identity & Security** + +- Uses **cryptographic identities** to verify peers. +- Ensures encrypted and authenticated communication between nodes. + +### 3. **Peer Routing** + +- Implements **Distributed Hash Tables (DHTs)** for efficient peer discovery. +- Helps nodes locate and connect with others dynamically. + +### 4. **Content Routing** + +- Allows efficient lookup and retrieval of data across the network. +- Optimizes distributed content addressing for performance and reliability. + +### 5. 
**Messaging & PubSub** + +- Supports **publish-subscribe (PubSub) messaging** for real-time data exchange. +- Facilitates decentralized event-driven communication in distributed applications. + +## Why Use libp2p? + +libp2p is widely adopted in decentralized technologies because of its: + +- **Modularity** – Developers can mix and match components based on project needs. +- **Interoperability** – Works across different networks, transport protocols, and applications. +- **Scalability** – Designed to handle thousands of peers efficiently. +- **Security** – Implements robust encryption and authentication mechanisms. + +## Getting Started with libp2p + +To start using libp2p, explore the following resources: + +- [libp2p Conceptual Documentation](https://docs.libp2p.io/concepts/) +- [libp2p GitHub Repository](https://github.com/libp2p/) + +libp2p is shaping the future of **decentralized communication** by enabling efficient, secure, and scalable peer-to-peer networking. diff --git a/docs/defradb/getting-started.md b/docs/defradb/getting-started.md new file mode 100644 index 0000000..9273bfa --- /dev/null +++ b/docs/defradb/getting-started.md @@ -0,0 +1,455 @@ +--- +sidebar_position: 1 +title: Getting Started +slug: /defradb +--- + +# DefraDB Overview + +![DefraDB Overview](/img/defradb-cover.png) + +DefraDB is an application-centric database that prioritizes data ownership, personal privacy, and information security. Its data model, powered by the convergence of [MerkleCRDTs](https://arxiv.org/pdf/2004.00107.pdf) and the content-addressability of [IPLD](https://docs.ipld.io/), enables a multi-write-master architecture. It features [DQL](./references/query-specification/query-language-overview.md), a query language compatible with GraphQL but providing extra convenience. By leveraging peer-to-peer infrastructure, it can be deployed nimbly in novel topologies. 
Access control is determined by a relationship-based DSL, supporting document or field-level policies, secured by the SourceHub infrastructure. DefraDB is a core part of the [Source technologies](https://source.network/) that enable new paradigms of local-first software, edge compute, access-control management, application-centric features, data trustworthiness, and much more.
+
+Disclaimer: At this early stage, DefraDB does not offer data encryption, and the default configuration exposes the database to the infrastructure. The software is provided "as is" and is not guaranteed to be stable, secure, or error-free. We encourage you to experiment with DefraDB and provide feedback, but please do not use it for production purposes until it has been thoroughly tested and developed.
+
+## Install
+
+Install `defradb` by [downloading an executable](https://github.com/sourcenetwork/defradb/releases) or building it locally using the [Go toolchain](https://golang.org/):
+
+```bash
+git clone git@github.com:sourcenetwork/defradb.git
+cd defradb
+make install
+```
+
+Ensure `defradb` is included in your `PATH`:
+
+```bash
+export PATH=$PATH:$(go env GOPATH)/bin
+```
+
+We recommend experimenting with queries using a native GraphQL client. [Altair](https://altairgraphql.dev/#download) is a popular option.
+
+## Key Management - Initial Setup
+
+DefraDB has a built-in keyring for storing private keys securely. Keys are loaded at startup, and a secret must be provided via the `DEFRA_KEYRING_SECRET` environment variable. The following keys are loaded from the keyring on start:
+
+- `peer-key` Ed25519 private key (required)
+- `encryption-key` AES-128, AES-192, or AES-256 key (optional)
+- `node-identity-key` Secp256k1 private key (optional). This key is used for the node's identity.
+
+The secret can be stored in a `.env` file in the working directory, or in a file at a path given by the `--secret-file` flag.
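As a quick sketch of that setup (the variable name `DEFRA_KEYRING_SECRET` comes from the text above; using `openssl` to generate the secret is an assumption, and any secure random source works):

```shell
# Generate a random 32-byte hex secret and store it in a .env file,
# where DefraDB can pick it up at startup.
secret=$(openssl rand -hex 32)
printf 'DEFRA_KEYRING_SECRET=%s\n' "$secret" > .env
```

With the secret in place, starting the node from the same working directory lets it load the keyring secret from `.env`.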
+
+Keys will be randomly generated on the node's first start if they are not found. To generate them manually:
+
+```bash
+defradb keyring generate
+```
+
+Import external keys:
+
+```bash
+defradb keyring import <name> <private-key-hex>
+```
+
+To learn more about the available options:
+
+```bash
+defradb keyring --help
+```
+
+NOTE: Node identity is an identity assigned to the node. It is used to exchange encryption keys with other nodes.
+
+## Start
+
+Start a node by executing:
+
+```bash
+defradb start
+```
+
+Verify the local connection:
+
+```bash
+defradb client collection describe
+```
+
+## Configuration
+
+DefraDB uses a default configuration:
+
+- Data directory: `~/.defradb/`
+- GraphQL endpoint: `http://localhost:9181/api/v0/graphql`
+
+The `client` command interacts with the locally running node.
+
+The GraphQL endpoint can be used with a GraphQL client (e.g., Altair) to conveniently perform requests (`query`, `mutation`) and obtain schema introspection. Read more about [configuration options](./references/config.md).
+
+## Add a schema type
+
+Define and add a schema type:
+
+```bash
+defradb client schema add '
+  type User {
+    name: String
+    age: Int
+    verified: Boolean
+    points: Float
+  }
+'
+```
+
+For more examples of schema type definitions, see the [examples/schema/](examples/schema/) folder.
+
+## Create a document
+
+Submit a `mutation` request to create a document of the `User` type:
+
+```bash
+defradb client query '
+  mutation {
+    create_User(input: {age: 31, verified: true, points: 90, name: "Bob"}) {
+      _docID
+    }
+  }
+'
+```
+
+Expected response:
+
+```json
+{
+  "data": {
+    "create_User": [
+      {
+        "_docID": "bae-91171025-ed21-50e3-b0dc-e31bccdfa1ab"
+      }
+    ]
+  }
+}
+```
+
+`_docID` is the document's unique identifier determined by its schema and initial data.
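The same mutation can also be sent straight to the GraphQL endpoint listed above over HTTP. A sketch, assuming a node is running locally (the `{"query": ...}` body is the standard GraphQL-over-HTTP request shape, and `curl` is assumed available):

```shell
# Standard GraphQL-over-HTTP request body for the create mutation above.
body='{"query":"mutation { create_User(input: {age: 31, verified: true, points: 90, name: \"Bob\"}) { _docID } }"}'
# POST it to the default endpoint; "|| true" keeps the sketch harmless
# when no node is listening.
curl -s -X POST -H 'Content-Type: application/json' \
  -d "$body" http://localhost:9181/api/v0/graphql || true
```

This is exactly what a GraphQL client such as Altair does under the hood when pointed at the endpoint.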
+ +## Query documents + +Once you have populated your node with data, you can query it: + +```bash +defradb client query ' + query { + User { + _docID + age + name + points + } + } +' +``` + +This query obtains *all* users and returns their fields `_docID, age, name, points`. GraphQL queries only return the exact fields requested. + +You can further filter results with the `filter` argument. + +```bash +defradb client query ' + query { + User(filter: {points: {_ge: 50}}) { + _docID + age + name + points + } + } +' +``` + +This returns only user documents which have a value for the `points` field *Greater Than or Equal to* (`_ge`) 50. + +## Obtain document commits + +DefraDB's data model is based on [MerkleCRDTs](https://arxiv.org/pdf/2004.00107.pdf). Each document has a graph of all of its updates, similar to Git. The updates are called `commit`s and are identified by `cid`, a content identifier. Each references its parents by their `cid`s. To get the most recent commit in the MerkleDAG for the document identified as `bae-91171025-ed21-50e3-b0dc-e31bccdfa1ab`: + +```bash +defradb client query ' + query { + latestCommits(docID: "bae-91171025-ed21-50e3-b0dc-e31bccdfa1ab") { + cid + delta + height + links { + cid + name + } + } + } +' +``` + +It returns a structure similar to the following, which contains the update payload that caused this new commit (`delta`) and any subgraph commits it references. 
+
+```json
+{
+  "data": {
+    "latestCommits": [
+      {
+        "cid": "bafybeifhtfs6vgu7cwbhkojneh7gghwwinh5xzmf7nqkqqdebw5rqino7u",
+        "delta": "pGNhZ2UYH2RuYW1lY0JvYmZwb2ludHMYWmh2ZXJpZmllZPU=",
+        "height": 1,
+        "links": [
+          {
+            "cid": "bafybeiet6foxcipesjurdqi4zpsgsiok5znqgw4oa5poef6qtiby5hlpzy",
+            "name": "age"
+          },
+          {
+            "cid": "bafybeielahxy3r3ulykwoi5qalvkluojta4jlg6eyxvt7lbon3yd6ignby",
+            "name": "name"
+          },
+          {
+            "cid": "bafybeia3tkpz52s3nx4uqadbm7t5tir6gagkvjkgipmxs2xcyzlkf4y4dm",
+            "name": "points"
+          },
+          {
+            "cid": "bafybeia4off4javopmxcdyvr6fgb5clo7m5bblxic5sqr2vd52s6khyksm",
+            "name": "verified"
+          }
+        ]
+      }
+    ]
+  }
+}
+```
+
+Obtain a specific commit by its content identifier (`cid`):
+
+```bash
+defradb client query '
+  query {
+    commits(cid: "bafybeifhtfs6vgu7cwbhkojneh7gghwwinh5xzmf7nqkqqdebw5rqino7u") {
+      cid
+      delta
+      height
+      links {
+        cid
+        name
+      }
+    }
+  }
+'
+```
+
+## DefraDB Query Language (DQL)
+
+DQL is compatible with GraphQL but features various extensions.
+
+Read its documentation [here](./references/query-specification/query-language-overview.md) to discover its filtering, ordering, limiting, relationships, variables, aggregate functions, and other useful features.
+
+## Peer-to-peer data synchronization
+
+DefraDB leverages peer-to-peer networking for data exchange, synchronization, and replication of documents and commits.
+
+When starting a node for the first time, a key pair is generated and stored in its "root directory" (`~/.defradb/` by default).
+
+Each node has a unique `PeerID` generated from its public key. This ID allows other nodes to connect to it.
+
+To view your node's peer info:
+
+```bash
+defradb client p2p info
+```
+
+There are two types of peer-to-peer relationships supported: **pubsub** peering and **replicator** peering.
+
+Pubsub peering *passively* synchronizes data between nodes by broadcasting *Document Commit* updates to the topic of the commit's document key.
Nodes must be listening on the pubsub channel to receive updates. This is useful when two nodes *already* share a document and want to keep it in sync.
+
+Replicator peering *actively* pushes changes from a specific collection *to* a target peer.
+
+
+<details>
+<summary>Pubsub example</summary>
+
+Pubsub peers can be specified on the command line using the `--peers` flag, which accepts a comma-separated list of peer [multiaddresses](https://docs.libp2p.io/concepts/addressing/). For example, a node at IP `192.168.1.12`, listening on port 9000, with PeerID `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B` would be referred to using the multiaddress `/ip4/192.168.1.12/tcp/9000/p2p/12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`.
+
+Let's go through an example of two nodes (*nodeA* and *nodeB*) connecting with each other over pubsub, on the same machine.
+
+Start *nodeA* with a default configuration:
+
+```bash
+defradb start
+```
+
+Obtain the node's peer info:
+
+```bash
+defradb client p2p info
+```
+
+In this example, the PeerID is `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`; yours will be different.
+
+For *nodeB*, we provide the following configuration:
+
+```bash
+defradb start --rootdir ~/.defradb-nodeB --url localhost:9182 --p2paddr /ip4/127.0.0.1/tcp/9172 --peers /ip4/127.0.0.1/tcp/9171/p2p/12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B
+```
+
+About the flags:
+
+- `--rootdir` specifies the root directory (config and data) to use
+- `--url` is the address to listen on for the client HTTP and GraphQL API
+- `--p2paddr` is a comma-separated list of multiaddresses to listen on for p2p networking
+- `--peers` is a comma-separated list of peer multiaddresses
+
+This starts two nodes and connects them via pubsub networking.
+</details>
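The multiaddress format used in the pubsub example is mechanical to assemble; a small sketch using the same example values:

```shell
# A peer multiaddress is just /ip4/<IP>/tcp/<port>/p2p/<PeerID> joined together.
ip="192.168.1.12"
port="9000"
peer_id="12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B"
maddr="/ip4/${ip}/tcp/${port}/p2p/${peer_id}"
echo "$maddr"
```

The resulting string is exactly what the `--peers` flag expects, with multiple entries joined by commas.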
+ +
+<details>
+<summary>Subscription example</summary>
+
+It is possible to subscribe to updates on a given collection by using its ID as the pubsub topic. The ID of a collection is found as the field `collectionID` in one of its documents. Here we use the collection ID of the `User` type we created above. After setting up two nodes as shown in the [Pubsub example](#pubsub-example) section, we can subscribe to collection updates on *nodeA* from *nodeB* by using the following command:
+
+```bash
+defradb client p2p collection add --url localhost:9182 bafkreibpnvkvjqvg4skzlijka5xe63zeu74ivcjwd76q7yi65jdhwqhske
+```
+
+Multiple collection IDs can be added at once:
+
+```bash
+defradb client p2p collection add --url localhost:9182 <collection1ID>,<collection2ID>,<collection3ID>
+```
+
+</details>
+ +
+<details>
+<summary>Replicator example</summary>
+
+Replicator peering is targeted: it allows a node to actively send updates to another node. Let's go through an example of *nodeA* actively replicating to *nodeB*.
+
+Start *nodeA*:
+
+```bash
+defradb start
+```
+
+In another terminal, add this example schema to it:
+
+```bash
+defradb client schema add '
+  type Article {
+    content: String
+    published: Boolean
+  }
+'
+```
+
+Start (or continue running from above) *nodeB*, which will receive updates:
+
+```bash
+defradb start --rootdir ~/.defradb-nodeB --url localhost:9182 --p2paddr /ip4/0.0.0.0/tcp/9172
+```
+
+Here we *do not* specify `--peers`, as we will manually define a replicator after startup via the `rpc` client command.
+
+In another terminal, add the same schema to *nodeB*:
+
+```bash
+defradb client schema add --url localhost:9182 '
+  type Article {
+    content: String
+    published: Boolean
+  }
+'
+```
+
+Then copy the peer info from *nodeB*:
+
+```bash
+defradb client p2p info --url localhost:9182
+```
+
+Set *nodeA* to actively replicate the Article collection to *nodeB*, passing it the copied peer info:
+
+```bash
+defradb client p2p replicator set -c Article <nodeB-peer-info>
+```
+
+As we add or update documents in the Article collection on *nodeA*, they will be actively pushed to *nodeB*. Note that changes to *nodeB* will still be passively published back to *nodeA*, via pubsub.
+</details>
+
+## Securing the HTTP API with TLS
+
+By default, DefraDB will expose its HTTP API at `http://localhost:9181/api/v0`. It's also possible to configure the API to use TLS with self-signed certificates or Let's Encrypt.
+
+To start defradb with self-signed certificates placed under `~/.defradb/certs/`, with `server.crt` being the certificate and `server.key` being the private key, run:
+
+```bash
+defradb start --tls
+```
+
+The keys can be generated with your generator of choice or with `make tls-certs`.
+
+Since the keys should be stored within the DefraDB data and configuration directory, the recommended key generation command is `make tls-certs path="~/.defradb/certs"`.
+
+If they are not saved under `~/.defradb/certs`, then the public (`pubkeypath`) and private (`privkeypath`) key paths need to be explicitly defined, in addition to the `--tls` flag or `tls` set to `true` in the config.
+
+To start the server with TLS using keys in a custom path:
+
+```bash
+defradb start --tls --pubkeypath ~/path-to-cert.crt --privkeypath ~/path-to-key.key
+```
+
+## Access Control System
+
+Read more about access control [here](./references/acp.md).
+
+## Supporting CORS
+
+When accessing DefraDB through a frontend interface, you may be confronted with a CORS error. That is because, by default, DefraDB will not have any allowed origins set. To specify which origins should be allowed to access your DefraDB endpoint, you can specify them when starting the database:
+
+```bash
+defradb start --allowed-origins=https://yourdomain.com
+```
+
+If running a frontend app locally on localhost, allowed origins must be set with the port of the app:
+
+```bash
+defradb start --allowed-origins=http://localhost:3000
+```
+
+The catch-all `*` is also a valid origin.
+
+## External port binding
+
+By default, the HTTP API and P2P network will use localhost.
If you want to expose the ports externally, you need to specify the addresses in the config or via command-line parameters:
+
+```bash
+defradb start --p2paddr /ip4/0.0.0.0/tcp/9171 --url 0.0.0.0:9181
+```
+
+## Backing up and restoring
+
+It is currently not possible to do a full backup of DefraDB that includes the history of changes through the Merkle DAG. However, DefraDB currently supports a simple backup of the current data state in JSON format that can be used to seed a database or help with transitioning from one DefraDB version to another.
+
+To back up the data, run the following command:
+
+```bash
+defradb client backup export path/to/backup.json
+```
+
+To pretty print the JSON content when exporting, run the following command:
+
+```bash
+defradb client backup export --pretty path/to/backup.json
+```
+
+To restore the data, run the following command:
+
+```bash
+defradb client backup import path/to/backup.json
+```
+
+## Conclusion
+
+This should get you started with DefraDB. Read the rest of the documentation website for guides and further information.
diff --git a/docs/guides/_category_.json b/docs/defradb/guides/_category_.json
similarity index 100%
rename from docs/guides/_category_.json
rename to docs/defradb/guides/_category_.json
diff --git a/docs/defradb/guides/akash-deployment.md b/docs/defradb/guides/akash-deployment.md
new file mode 100644
index 0000000..4caceec
--- /dev/null
+++ b/docs/defradb/guides/akash-deployment.md
@@ -0,0 +1,146 @@
+---
+sidebar_label: Akash Deployment Guide
+sidebar_position: 70
+draft: true
+---
+# Deploy DefraDB on Akash
+
+## Overview
+
+This guide will walk you through the required steps to deploy DefraDB on Akash.
+
+## Prerequisites
+
+Before you get started, you will need an Akash account with at least 5 AKT. If you don't have an Akash account, you can create one by installing [Keplr](https://www.keplr.app/).
+ +## Deploy + +![Cloudmos console](/img/akash/deploy.png "Cloudmos console") + +Deploying on Akash can be done through the [Cloudmos console](https://deploy.cloudmos.io/new-deployment). Click on the "Empty" deployment type and copy the config below into the editor. + +```yaml +--- +version: "2.0" + +services: + defradb: + image: sourcenetwork/defradb:develop + args: + - start + - --url=0.0.0.0:9181 + expose: + - port: 9171 + as: 9171 + to: + - global: true + - port: 9181 + as: 80 + to: + - global: true + +profiles: + compute: + defradb: + resources: + cpu: + units: 1.0 + memory: + size: 1Gi + storage: + size: 1Gi + placement: + akash: + attributes: + host: akash + signedBy: + anyOf: + - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63" + - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4" + pricing: + defradb: + denom: uakt + amount: 10000 + +deployment: + defradb: + akash: + profile: defradb + count: 1 +``` + +Next click the "Create Deployment" button. A pop-up will appear asking you to confirm the configuration transaction. + +After confirming you will be prompted to select a provider. Select a provider with a price and location that makes sense for your use case. + +A final pop-up will appear asking you to confirm the deployment transaction. If the deployment is successful you should now see deployment info similar to the image below. + +## Deployment Info + +![Cloudmos deployment](/img/akash/info.png "Cloudmos deployment") + +To configure and interact with your DefraDB node, you will need the P2P and API addresses. They can be found at the labeled locations in the image above. + +## P2P Replication + +To replicate documents from a local DefraDB instance to your Akash deployment you will need to create a shared schema on both nodes. + +Run the commands below to create the shared schema. 
+
+First on the local node:
+
+```bash
+defradb client schema add '
+  type User {
+    name: String
+    age: Int
+  }
+'
+```
+
+Then on the Akash node:
+
+```bash
+defradb client schema add --url <api-address> '
+  type User {
+    name: String
+    age: Int
+  }
+'
+```
+
+> The API address can be found in the [deployment info](#deployment-info).
+
+Next you will need the peer ID of the Akash node. Run the command below to view the node's peer info:
+
+```bash
+defradb client p2p info --url <api-address>
+```
+
+If the command is successful, you should see output similar to the text below.
+
+```json
+{
+  "ID": "12D3KooWQr7voGBQPTVQrsk76k7sYWRwsAdHRbRjXW39akYomLP3",
+  "Addrs": [
+    "/ip4/0.0.0.0/tcp/9171"
+  ]
+}
+```
+
+> The address here is the node's p2p bind address. The public p2p address can be found in the [deployment info](#deployment-info).
+
+Set up the replicator from your local node to the Akash node by running the command below:
+
+```bash
+defradb client p2p replicator set --collection User '{
+  "ID": "12D3KooWQr7voGBQPTVQrsk76k7sYWRwsAdHRbRjXW39akYomLP3",
+  "Addrs": [
+    "/dns/<host>/tcp/<port>"
+  ]
+}'
+```
+
+> The p2p host and port can be found in the [deployment info](#deployment-info). For example: if your p2p address is http://provider.bdl.computer:32582/ then the host would be provider.bdl.computer and the port would be 32582.
+
+The local node should now be replicating all User documents to the Akash node.
diff --git a/docs/defradb/guides/content-addressable-storage.md b/docs/defradb/guides/content-addressable-storage.md
new file mode 100644
index 0000000..97dfa87
--- /dev/null
+++ b/docs/defradb/guides/content-addressable-storage.md
@@ -0,0 +1,98 @@
+---
+sidebar_label: Content Addressable Storage
+sidebar_position: 80
+---
+
+# Content Addressable Storage
+
+## Overview
+
+Content-Addressable Storage (CAS) is a way to store data that works differently from what you might be used to.
Normally, when you save something on your computer or online, you find it by where it is stored, like a file path or a website address. But with CAS, each piece of data gets its own special ID based on what it actually is. + +This special ID comes from running the data through a hash function, which turns it into a unique digest. If the data changes even a little, the digest changes too. Content-addressable storage can tell if someone tried to change or mess with the data. It also saves space because if two pieces of data are exactly the same, it stores that data only once. + +This method matters because it helps keep data safe and trustworthy. It makes it easy to track different versions over time and works well in systems where many computers share data with each other. + +## How DefraDB uses CAS + +DefraDB’s data model is built on IPLD (InterPlanetary Linked Data), which connects and represents data using Merkle Directed Acyclic Graphs (Merkle DAGs) and hash-based addressing. Here’s what these terms mean in simple language: + +* **IPLD**: This is a way to represent and connect data across distributed systems using hashes. It makes data universally linkable and verifiable. + +* **Merkle DAGs**: In DefraDB, every document and every update is a node in a Merkle DAG. Each change creates a “commit,” similar to Git. The commit has its own hash and also links back to earlier commits by their hashes. + +* **Hash-based addressing**: Each version of a document, or even a field update, is given a unique identifier called a CID (Content Identifier). The CID is generated from the content itself, so if the content changes, the CID changes too. + +* **Storage and retrieval**: When you create or update a document, DefraDB saves the difference (the part that changed) and assigns it a CID. To fetch the data, a user or peer asks for the CID. The system then finds the content and verifies it by recalculating the hash. 
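The change-detection property described above is easy to demonstrate with any cryptographic hash tool; a sketch using `sha256sum` (assumed available; IPFS-style CIDs wrap such digests in extra metadata, so real CIDs look different):

```shell
# Content addressing in miniature: the identifier is derived from the bytes themselves.
a=$(printf '%s' 'hello world'  | sha256sum | cut -d' ' -f1)
b=$(printf '%s' 'hello world!' | sha256sum | cut -d' ' -f1)
echo "$a"  # digest of the original content
echo "$b"  # one extra character yields a completely different digest
```

Identical content always hashes to the same digest, which is also why deduplication falls out of this scheme for free.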
+ +## Why it matters + +Using CAS in DefraDB brings many benefits that make data safer, easier to manage, and more flexible. + +* **Immutability and auditability**: Every time you change a document, DefraDB records that update permanently. You can always see what happened and when, making the data trustworthy. + +* **Deduplication**: DefraDB stores only one copy of identical data because it identifies data by its content. This saves space and makes storage efficient. + +* **Tamper-evidence**: If someone changes data without permission, the content hash stops matching. DefraDB can detect this easily. + +* **Peer-to-peer friendly**: CAS works well when many computers share data directly. DefraDB syncs updates quickly, even when offline or on weak networks. + +* **Efficient versioning**: DefraDB saves every change with its own ID. You can go back to any earlier version of the data, making “time travel” through history possible. + +## CAS in Action + +Here’s how it works step by step: + +* **Storing data**: When you create a new document, like a user profile, DefraDB calculates a unique digest called a CID by hashing the document content. The CID becomes the document’s permanent ID. DefraDB stores the document under that CID. If two documents have the same content, then they share the same CID and DefraDB stores the data only once. + +* **Updating data**: When you change a document, DefraDB does not replace the old data. It saves the update as a separate new node, linking it to the previous version and forming a chain called a Merkle DAG. Each update gets its own CID representing a new version. DefraDB keeps the full change history this way. + +## Synchronization Process + +Content-addressable storage gives DefraDB a strong foundation to manage data across devices, users, and network conditions. 
Here is how it supports key features step by step: + +### Supporting CRDTs for Conflict-free Collaboration + +DefraDB implements **Merkle CRDTs**, a specialized type of Conflict-free Replicated Data Type that combines traditional CRDT merge semantics with Merkle DAGs for efficient distributed collaboration: + +**Merkle Clock Implementation:** + +1. Each document change creates a new node in the Merkle DAG with a unique CID and a height value (incremental counter) +2. The Merkle clock uses the inherent causality of the DAG structure—since node A's CID is embedded in node B, A must exist before B +3. This eliminates the need to maintain per-peer metadata, making DefraDB efficient in high-churn networks with unlimited peers + +**Delta State CRDT Semantics:** + +1. DefraDB uses delta state-based CRDTs that only transmit the minimum "delta" needed to transform one state to another +2. Instead of sending entire document states, only the changed portion (like adding "banana" to a fruit set) is transmitted +3. This hybrid approach provides the benefits of both operation-based (small message size) and state-based (reliable delivery) CRDTs + +**Branching and Merging:** + +1. When peers make concurrent edits, the Merkle DAG naturally branches into independent states +2. Each branch maintains its own valid history until synchronization occurs +3. Merging creates a new "merge node" with multiple parents, applying CRDT-specific merge semantics +4. The system finds common ancestral nodes using height parameters and CIDs to resolve conflicts deterministically + +**Conflict Resolution Process:** + +1. When conflicts occur, DefraDB traverses both branches back to their common ancestor +2. The embedded CRDT type (register, counter, set, etc.) defines the specific merge rules +3. All changes are preserved in the final merged state, ensuring no data loss +4. 
The resulting merge maintains the DAG structure and provides a new canonical head.
+
+### Enabling Efficient Synchronization Across Peers
+
+1. Peers exchange CIDs representing the latest document versions.
+1. Each peer compares the received CIDs with its own and requests only the missing data.
+1. Each peer verifies incoming data by recalculating the hash and matching it to the CID, rejecting any data that does not match.
+1. Verified data is added to the local Merkle DAG to update the document history.
+
+### Making Offline-first Work Smoothly
+
+1. Users make changes locally even without internet. Each update gets a new CID and joins the local Merkle DAG.
+1. When online again, devices share new CIDs and sync changes.
+1. DefraDB merges updates from different peers by applying CRDT rules to the full change histories.
+1. This process ensures all peers arrive at the same up-to-date data without conflicts or loss.
+
+Overall, content-addressable storage lets DefraDB create reliable, easy-to-sync, and conflict-free data systems that work online and offline.
diff --git a/docs/defradb/guides/content-identifier.md b/docs/defradb/guides/content-identifier.md
new file mode 100644
index 0000000..556d176
--- /dev/null
+++ b/docs/defradb/guides/content-identifier.md
@@ -0,0 +1,94 @@
+---
+sidebar_label: Content Identifiers (CID)
+sidebar_position: 90
+---
+
+# Content Identifiers (CID)
+
+## Overview
+
+Content Identifiers (CIDs) are foundational in content-addressable storage (CAS) systems, providing a globally unique, self-describing reference to digital content based on what it is rather than where it is stored. CIDs allow systems to efficiently and securely retrieve, verify, link, and manage data, enabling immutable and decentralized data storage solutions such as IPFS, IPLD, and DefraDB.
+
+## Why Content Identifiers Matter
+
+Traditional web addresses (URLs) tell you **where** data lives—on a specific server, at a specific location.
+ +Content Identifiers tell you **what** the data is—a unique fingerprint of the content itself. + +This fundamental shift enables: + +- **Decentralized architecture:** Any node can serve data, not just the original source +- **Self-verifying data:** Content proves its own integrity through cryptographic hashing +- **Permanent links:** References that never break, even when data moves +- **Automatic deduplication:** The same content always has the same identifier, eliminating redundant storage +- **True data portability:** Content can move freely between platforms while maintaining its identity + +This transformation from location-based to content-based addressing is as significant as the shift from IP addresses to domain names—but instead of making locations human-readable, CIDs make content itself addressable. + +## Content Identifier Basics + +A **CID** uniquely identifies data by combining a cryptographic hash with encoding metadata. This makes a CID: + +- **Deterministic:** No randomness—the same input always yields the same CID +- **Consistent across locations:** The same content always produces the same CID +- **Unique:** Different content results in different CIDs +- **Self-describing:** The identifier encodes what the data is and how to verify it + +### Understanding Cryptographic Hashes + +To understand how CIDs achieve these properties, we first need to understand the cryptographic hashes that power them. + +A **cryptographic hash** is a mathematical function that takes input data of any size and transforms it into a fixed-length string of bits, called a hash value or digest. 
This process is: + +- **Deterministic:** The same input always produces the same output +- **Collision-resistant:** Different inputs produce different outputs +- **One-way:** Cannot reverse the hash to get the original data +- **Sensitive:** Even a tiny change in input results in a completely different hash value + +Content-Addressable Storage (CAS) uses these cryptographic fingerprints to store and access data, ensuring integrity and enabling efficient deduplication. + +### Key CID Properties + +| Property | Description | Technical Benefit | +|----------|-------------|-------------------| +| **Immutability** | Any change to content changes the CID | Enables trustless verification | +| **Deduplication** | Same content anywhere yields the same CID | Reduces storage significantly in typical datasets | +| **Integrity verification** | CIDs ensure the authenticity of retrieved data | Cryptographic proof of data integrity | +| **Versioning** | Unique CIDs support tracking content over time | Implicit version control | + +## CID Structure + +With these fundamentals in place, let's examine how CIDs are actually structured and what each component does. + +### Visual Overview + +A CID consists of multiple components that work together to create a self-describing content identifier: +```bash +bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi +│ └┬┘└───────────────────────────────────────────────────────┘ +│ │ │ +│ │ └── Base32-encoded multihash +│ └──────────────────────────────── Multicodec (dag-pb) +└──────────────────────────────────── Multibase prefix (b = base32) +``` + +### Component Breakdown + +- **Multibase prefix:** Indicates how the CID is encoded (like choosing between binary and text). This allows CIDs to be represented in different formats for different use cases +- **Multicodec:** Specifies what format the content uses (raw bytes, JSON, CBOR, etc.). 
This tells systems how to interpret the data
+- **Multihash:** Contains the actual cryptographic fingerprint of your content, along with information about which hash function was used
+
+### Technical Components
+
+| Component | Description | Details | Example Values |
+|-----------|-------------|---------|----------------|
+| **Multibase prefix** | Specifies encoding format | First character(s) of the CID | `b` (base32), `z` (base58btc), `f` (base16) |
+| **Multicodec** | Identifies content type/format | Varint-encoded codec identifier | `0x70` (dag-pb), `0x71` (dag-cbor), `0x55` (raw) |
+| **Multihash** | Hash function and digest | Function ID + digest length + digest | SHA-256, Blake2b-256, SHA-3 |
+
+### CID Versions
+
+| Version | Details | Example CID | Binary Structure |
+|---------|---------|-------------|------------------|
+| **CIDv0** | Base58btc encoding, supports only dag-pb and SHA-256 | `QmYwAPJzv5CZsnA...` | `<multihash>` only |
+| **CIDv1** | Supports multiple codecs, hash functions, and encodings | `bafybeigdyrzt5sf...` | `<multibase><cid-version><multicodec><multihash>` |
diff --git a/docs/defradb/guides/deployment-guide.md b/docs/defradb/guides/deployment-guide.md
new file mode 100644
index 0000000..dfb6bba
--- /dev/null
+++ b/docs/defradb/guides/deployment-guide.md
@@ -0,0 +1,156 @@
+---
+sidebar_label: Deployment Guide
+sidebar_position: 80
+---
+# A Guide to DefraDB Deployment
+DefraDB aspires to be a versatile database, supporting both single-node and clustered deployments. In a clustered setup, multiple nodes collaborate seamlessly. This guide walks you through deploying DefraDB, from single-node configurations to cloud and server environments. Let’s begin.
+
+## Prerequisites
+The prerequisites listed in this section should be met before starting the deployment process.
+
+**Pre-Compiled Binaries** - Each release has its own set of pre-compiled binaries for different Operating Systems.
Obtain the pre-compiled binaries for your operating system from the [official releases](https://github.com/sourcenetwork/defradb/releases).
+
+### Bare Metal Deployment
+
+For Bare Metal deployments, there are two methods available:
+
+- ### Building from Source
+
+Ensure Git, Go, and Make are installed in your development environment.
+
+1. **Unix (Mac and Linux)** - The main requirement is the [Go language toolchain](https://go.dev/dl/), which is supported up to Go 1.20 in DefraDB due to the current dependencies.
+2. **Windows** - Install the [MinGW toolchain](https://www.mingw-w64.org/) specific to GCC and add the [Make toolchain](https://www.gnu.org/software/make/).
+
+Follow these steps to build from source:
+
+1. Run `git clone` to download the [DefraDB repository](https://github.com/sourcenetwork/defradb#install) to your local machine.
+2. Navigate to the repository using `cd`.
+3. Run `make` to build a local DefraDB setup with default configurations.
+4. To include the playground, set the compiler build tags: `GOFLAGS="-tags=playground"`
+
+#### Build Playground
+
+Refer to the Playground Basics Guide for detailed instructions.
+
+1. Compile the playground separately using the command: `make deps:playground`
+2. This produces a bundle file in a folder called `dist`.
+3. Install the [NodeJS language toolchain](https://nodejs.org/en/download/current) and use npm to build locally on your machine. The JavaScript and TypeScript code is compiled into an output bundle for the frontend.
+4. Build a playground-specific version of DefraDB. With the Go flags environment variable set, the compiler embeds the playground bundle (approximately 4MB) directly in the binary using [go binary embed](https://pkg.go.dev/embed).
+
+
+
+- ### Docker Deployments
+
+Docker deployments are designed for containerized environments.
The main prerequisite is that Docker should be installed on your machine.
+
+
+The steps for Docker deployment are as follows:
+
+1. Install Docker by referring to the [official Docker documentation](https://docs.docker.com/get-docker/).
+2. Navigate to the root of the repository where the Dockerfile is located.
+3. Run the following command:
+`docker build -t defra -f tools/defradb.containerfile .`
+
+
+**Note**: The period at the end is important (it sets the build context to the current directory), and the `-f` flag specifies the container file location.
+
+The container file is in a subfolder called tools: `path: tools/defradb.containerfile`
+
+Docker images streamline the deployment process and require fewer dependencies: the image wraps the DefraDB binary, making it suitable for both manual and one-click deployments.
+
+## Deployment
+
+### Manual Deployment
+
+DefraDB is a single statically built binary with no third-party dependencies. Similar to bare metal, it can run on any cloud or machine. Execute the following command to start DefraDB:
+`defradb start --store badger`
+
+
+
+### AWS Environment
+
+For deploying to an AWS environment, note the following:
+
+- Deploy effortlessly with a prebuilt [AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (Amazon Machine Image) featuring DefraDB.
+- Access the image ID or opt for the convenience of the Amazon Marketplace link.
+- Refer to [AWS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html) for an easy EC2 instance launch with your specified image size.
+- Customize your setup using Packer and Terraform scripts in this directory: `tools/cloud/aws/packer`
+
+### Akash Deployments
+
+For detailed instructions on deploying DefraDB with Akash, refer to the [Akash Deployment Guide](https://nasdf-feat-akash-deploy.docs-source-network.pages.dev/guides/akash-deployment).
+
+## Configurations
+
+- The default root directory on Unix machines is `$HOME/.defradb`.
For Windows, it is `%USERPROFILE%\.defradb`.
+- Specify the DefraDB root folder with the `--rootdir` flag: `defradb --rootdir <path> start`.
+- By default, data is stored in the `data` subdirectory of the root directory.
+
+## Storage Engine
+
+The storage engines currently used include:
+
+- File-backed persistent storage powered by the [Badger](https://github.com/dgraph-io/badger) database. It is specified with this flag: `--store badger`
+- [In-Memory Storage](https://github.com/sourcenetwork/defradb/blob/develop/datastore/memory/memory.go), which is B-Tree based and ideal for testing, as it does not touch the file system. It is specified with this flag: `--store memory`
+
+## Network and Connectivity
+
+As a P2P database, DefraDB requires two ports for node communication:
+
+1. **API Port**: It powers the HTTP API, handling queries from the client to the database and various API commands. The default port number is *9181*.
+
+2. **P2P Port**: It facilitates communication between nodes, supporting data sharing, synchronization, and replication. The default port number is *9171*.
+
+The P2P networking functionality can be deactivated with the `defradb start --no-p2p` command or through the config file.
+
+### Port Customization
+
+The API port can be specified using the [bind address](https://docs.libp2p.io/concepts/fundamentals/addressing/):
+
+API: `--url <ip>:<port>`
+
+For P2P, set the `--p2paddr` flag to a multiaddress:
+
+`--p2paddr <multiaddress>`
+
+Here is an [infographic](https://images.ctfassets.net/efgoat6bykjh/XQrDLqpkV06rFhT24viJc/1c2c72ddebe609c80fc848bfa9c4771e/multiaddress.png) to further understand multiaddresses.
+
+
+## The Peer Key
+
+Secure communication between nodes in DefraDB is established with a unique peer key for each node. Key details include:
+
+- The peer key is automatically generated on startup and placed in a key file at a specific path.
+- There is no current method for generating a new key except for overwriting an existing one.
+- The peer key is an Ed25519 key; this elliptic-curve scheme is used to generate the node's private key.
+- In-memory mode generates a new key with each startup.
+- Settings can also be specified in the config file, located at `<rootdir>/config.yaml`.
+- Users can additionally generate their own Ed25519 key, for example: `openssl genpkey -algorithm ed25519 -text`
+
+## Future Outlook
+
+As DefraDB evolves, the roadmap includes expanding compatibility with diverse deployment environments:
+
+- **Google Cloud Platform (GCP)**: Tailored deployment solutions for seamless integration with GCP environments.
+- **Kubernetes**: Optimization for Kubernetes deployments, ensuring scalability and flexibility.
+- **Embedded/IoT for Small Environments**: Adaptations to cater to the unique demands of embedded systems and IoT applications.
+- **WebAssembly (Wasm) Deployments**: Exploring deployment strategies utilizing WebAssembly for enhanced cross-platform compatibility.
+
+  
\ No newline at end of file
diff --git a/docs/guides/explain-systems.md b/docs/defradb/guides/explain-systems.md
similarity index 99%
rename from docs/guides/explain-systems.md
rename to docs/defradb/guides/explain-systems.md
index bb65485..3620e18 100644
--- a/docs/guides/explain-systems.md
+++ b/docs/defradb/guides/explain-systems.md
@@ -13,7 +13,7 @@ The DefraDB Explain System is a powerful tool designed to introspect requests, e
 ```graphql
 query {
   Author {
-    _key
+    _docID
     name
     age
   }
@@ -25,7 +25,7 @@ query {
 ```graphql
 query @explain {
   Author {
-    _key
+    _docID
     name
     age
   }
@@ -74,7 +74,7 @@ Having the plan arranged as parts in a graph is helpful because it's both fast t
 ### Simple Explain
 
-Simple Explain Requests is the default mode for explanation, only requiring the additional `@explain` directive. You can also be explicit and provide a type argument to the directive like this `@explain(type: simple)`. 
` +Simple Explain Requests is the default mode for explanation, only requiring the additional `@explain` directive. You can also be explicit and provide a type argument to the directive like this `@explain(type: simple)`. This mode of explanation returns only the syntactic and structural information of the Plan Graph, its nodes, and their attributes. diff --git a/docs/defradb/guides/merkle-crdt.md b/docs/defradb/guides/merkle-crdt.md new file mode 100644 index 0000000..a4c20ea --- /dev/null +++ b/docs/defradb/guides/merkle-crdt.md @@ -0,0 +1,61 @@ +--- +sidebar_label: Merkle CRDT Guide +sidebar_position: 30 +--- +# A Guide to Merkle CRDTs in DefraDB + +## Overview + +Merkle CRDTs are a type of Conflict-free Replicated Data Type (CRDT). They are designed to support independent updates across multiple peers and to merge those updates automatically without conflicts. The goal is to achieve deterministic, automatic data synchronization while maintaining consistency. [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) were first formalized in 2011 and have become an important tool in distributed computing. This approach is particularly useful in distributed applications where data must be updated and merged consistently across many actors, such as peer-to-peer networks or offline-first systems. + +## Concepts + +### Regular CRDTs + +Conflict-free Replicated Data Types (CRDTs) allow peers to collaborate and update data structures without explicit synchronization. They can be applied to registers, counters, sets, lists, maps, and much more. + +The key feature of CRDTs is deterministic merging. In other words, they always merge updates in a predictable way. No matter the order in which updates arrive, all peers eventually agree on the same final state. To make this possible, CRDTs keep track of when events happen, often using logical or vector clocks. 
These clocks store metadata for each peer, but this becomes inefficient when the number of peers is very large or constantly changing. + +### Limitations with Ordering + +In distributed systems, it is difficult to know the exact order of events across different machines. System clocks may not match, or they can even be manipulated, which leads to inconsistencies. + +Merkle CRDTs solve this by building causality directly into the structure of a Merkle Directed Acyclic Graph (Merkle DAG). This removes the need to maintain separate metadata for every peer, making the system more scalable. + +## Formalization of Merkle CRDTs + +A Merkle CRDT is built by combining a Merkle clock with a standard CRDT. The Merkle DAG ensures causality through its structure: every node includes the hash of its parent, so a new node cannot exist without its predecessor. This creates a verifiable, tamper-resistant chain of updates. + +Merkle CRDT includes: + +- **Merkle clock** – provides causality and ordering of events. +- **Embedded CRDT** – manages the type of data structure and the rules for merging updates. + +### Merkle Clock + +A Merkle clock uses the properties of Merkle DAGs, similar to blockchains. Each new node contains the identifier of its parent, creating a cryptographically verifiable chain of events that cannot be altered without detection. + +Each node also records a height value, which acts like a counter. This makes it easier to tell whether one event happened before, after, or at the same time as another. + +With these properties, a Merkle clock ensures that causality is always preserved and that the history of updates cannot be forged or tampered with. + +## Delta State Semantics + +There are two main ways to represent changes in CRDTs: operation-based and state-based. + +- **Operation-based CRDTs** send the intent of an action as the message. 
For example, “set the value to 10” or “increment the counter by 4.” These messages are usually small because they only contain the operation being performed. +- **State-based CRDTs** send the full resulting state as the message. For example, to set a value to 10, the message would contain the value 10 as the content. These messages are larger because they include both the current state and the change. + +Both approaches work, but each has trade-offs. Operation-based CRDTs are compact but depend on reliable delivery of every operation. State-based CRDTs are easier to reason about but become inefficient as the state grows. + +**Delta State CRDTs** combine the strengths of both. Instead of sending either the full operation or the full state, they only send the minimum change needed to move from one state to another. This small change is called a delta. + +For example, if there is a set of nine fruit names and you add “banana,” the delta message contains only the word “banana.” It does not resend the entire set of ten fruits. In this way, the message is as small as an operation-based CRDT but still captures the actual difference in state, like a state-based CRDT. + +This hybrid model is efficient and expressive, making it a practical choice for distributed systems where both bandwidth and consistency are important. + +## Branching and Merging + +Merkle CRDTs naturally support branching. When two peers update the same ancestor independently, their updates form separate branches. Each peer treats its branch as the main state, without requiring immediate resolution. This makes the system ideal for offline-first applications. + +Merging occurs when branches are brought back together. A merge node is created with multiple parents, and the embedded CRDT defines how to resolve differences between the branches. The Merkle clock ensures that the process respects causality, while the CRDT ensures that the merged state is valid and consistent. 
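The delta idea from the fruit example can be made concrete with a grow-only set, one of the simplest CRDTs. This is an illustrative sketch of delta-state semantics, not DefraDB's implementation (which also carries Merkle-clock metadata with each delta):

```python
class GSet:
    """Grow-only set CRDT with delta-state semantics."""

    def __init__(self, items=()):
        self.items = set(items)

    def add(self, item) -> set:
        """Apply an update locally and return the delta to broadcast."""
        delta = {item} - self.items  # only what actually changed
        self.items |= delta
        return delta

    def merge(self, delta: set):
        """Merging deltas is idempotent, commutative, and associative,
        so peers converge regardless of delivery order or duplicates."""
        self.items |= delta

peer_a = GSet(["apple", "pear", "plum"])
peer_b = GSet(["apple", "pear", "plum"])

delta = peer_a.add("banana")  # the message is just {"banana"}, not the whole set
peer_b.merge(delta)
peer_b.merge(delta)           # duplicate delivery is harmless
print(peer_a.items == peer_b.items)  # True
```

Because merging is a set union, re-delivered or reordered deltas cannot corrupt the state, which is exactly the property that lets branches merge deterministically.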
diff --git a/docs/guides/peer-to-peer.md b/docs/defradb/guides/peer-to-peer.md similarity index 97% rename from docs/guides/peer-to-peer.md rename to docs/defradb/guides/peer-to-peer.md index 5b6a52b..09de3c7 100644 --- a/docs/guides/peer-to-peer.md +++ b/docs/defradb/guides/peer-to-peer.md @@ -56,9 +56,6 @@ In passive replication, updates are broadcasted on a per-document level over the One major difference between active and passive networks is that an active network can focus on both collections and individual documents, while a passive network is only focused on individual documents. Active networks operate over a direct, point-to-point connection and allow you to select an entire collection to replicate to another node. For example, if you have a collection of books and specify a target node for active replication, the entire collection will be replicated to that node, including any updates to individual books. However, it is also possible to replicate granularly by selecting specific books within the collection for replication. Passive networks, on the other hand, are only concerned with replicating individual documents. -```bash -$ defradb client rpc addreplicator "Books" /ip4/0.0.0.0/tcp/9172/p2p/ -``` ## Concrete Features of P2P in DefraDB @@ -69,8 +66,11 @@ The Defra Command Line Interface (CLI) allows you to modify the behavior of the ```bash $ defradb start ... 
-2023-03-20T07:18:17.276-0400, INFO, defra.cli, Starting P2P node, {"P2P address": "/ip4/0.0.0.0/tcp/9171"} -2023-03-20T07:18:17.281-0400, INFO, defra.node, Created LibP2P host, {"PeerId": "12D3KooWEFCQ1iGMobsmNTPXb758kJkFc7XieQyGKpsuMxeDktz4", "Address": ["/ip4/0.0.0.0/tcp/9171"]} +Jan 2 10:15:49.124 INF cli Starting DefraDB +Jan 2 10:15:49.161 INF net Created LibP2P host PeerId=12D3KooWEFCQ1iGMobsmNTPXb758kJkFc7XieQyGKpsuMxeDktz4 Address=[/ip4/127.0.0.1/tcp/9171] +Jan 2 10:15:49.162 INF net Starting internal broadcaster for pubsub network +Jan 2 10:15:49.163 INF node Providing HTTP API at http://127.0.0.1:9181 PlaygroundEnabled=false +Jan 2 10:15:49.163 INF node Providing GraphQL endpoint at http://127.0.0.1:9181/api/v0/graphql ``` This host has a Peer ID, which is a function of a secret private key generated when the node is started for the first time. The Peer ID is important to know as it may be relevant for different parts of the peer-to-peer networking system. The libp2p networking stack can be enabled or disabled. @@ -110,7 +110,7 @@ When a node is started, it specifies a list of peers that it wants to stay conne To use the active replication feature in DefraDB, you can submit an add replicator Remote Procedure Call (RPC) command through the client API. You will need to specify the multi-address and Peer ID of the peer that you want to include in the replicator set, as well as the name of the collection that you want to replicate to that peer. These steps handle the process of defining which peers you want to connect to, enabling or disabling the underlying subsystems, and sending additional RPC commands to add any necessary replicators. 
```bash
-$ defradb client rpc addreplicator "Books" /ip4/0.0.0.0/tcp/9172/p2p/
+$ defradb client p2p replicator set -c Books
 ```
 
 ## Benefits of the P2P System
diff --git a/docs/defradb/guides/schema-migration.md b/docs/defradb/guides/schema-migration.md
new file mode 100644
index 0000000..6df51f6
--- /dev/null
+++ b/docs/defradb/guides/schema-migration.md
@@ -0,0 +1,286 @@
+---
+sidebar_label: Schema Migration Guide
+sidebar_position: 90
+---
+# A Guide to Schema Migration in DefraDB
+
+## Overview
+In a database system, an application’s requirements can change at any given time; schema migrations are necessary to meet this change. This is where Lens comes in: a migration engine that performs effective schema migrations.
+
+This guide will provide an understanding of schema migrations, focusing on the Lens migration engine. Let’s dive in!
+
+Lens is a pipeline for user-defined transformations. It enables users to write their transformations in any programming language and run them through the Lens pipeline, which transforms the cached representation of the data.
+
+## Goals of the Lens Migration System
+
+Here are some of the goals of the Lens schema migration system:
+
+- **Presenting a consistent view of data across nodes**: The Lens schema migration system can present data across nodes consistently, regardless of the schema version being used.
+
+- **Verifiability of data**: Schema migrations in the Lens system are represented as data; this preserves user-defined mutations without corrupting system-defined mutations and also allows migrating from one schema version to another.
+
+- **A language-agnostic way of writing schema migrations**: Schema migrations can be written in any programming language and executed properly, as Lens is language-agnostic.
+
+- **Safe usage of migrations by others through a sandbox**: Migrations written in Lens are run in a sandbox, which ensures safety and eliminates the risk of remote code execution (RCE).
+
+- **Peer-to-peer sync of schema migrations**: Lens allows peers to write their migrations in different application versions and sync without worrying about the versions other peers are using.
+
+- **Local autonomy of schema migrations**: Lens enables local autonomy in writing schema migrations by giving users control of the schema version they choose to use. Users can stay on a particular schema version and still communicate with peers on different versions, as Lens is not restricted to a particular schema version.
+
+- **Reproducibility and deterministic nature of executing migrations**: When using the Lens migration system, changes to schemas can be written, tagged, and shared with other peers regardless of their infrastructure and requirements for deployments.
+
+
+## Mechanism
+
+In this section, we’ll look at the mechanism behind the Lens migration system and explain how it works.
+
+The Lens migration system functions as a bi-directional transformation engine, enabling the migration of data documents in both forward and reverse directions. It allows for the transformation of documents from schema X to Y in the forward direction and Y to X in the reverse direction.
+
+This process is built on a verifiable foundation powered by WebAssembly (Wasm). Wasm also provides Lens's sandbox safety and its language-agnostic nature.
+
+Internally, schema migrations are evaluated lazily. This avoids the upfront cost of doing a massive migration at once.
+
+*Lazy evaluation is a technique in programming where an expression is only evaluated when its value is needed.*
+
+Adopting lazy evaluation in the migration system also allows rapid toggling between schema versions and representations.
+
+## Usage
+
+The Lens migration system addresses critical use cases related to schema migrations in peer-to-peer, eventually consistent databases.
These use cases include: + +  + +- **Safe Schema Progression**: Ensuring the seamless progression of database schemas is vital for accommodating changing application requirements. Lens facilitates the modification, upgrade, or reversion of schemas while upholding data integrity. + +- **Handling Peer-to-Peer Complexity**: In environments where different clients operate on varying application and database versions, Lens offers a solution to address the complexity of schema migrations. It ensures coherence and effectiveness across different networks. + +- **Language-Agnostic Flexibility**: Functions in Lens are designed to be language-agnostic, offering the versatility to define schema changes in the preferred programming language. This adaptability makes Lens suitable for diverse development environments and preferences. + +- **Lazy Evaluation**: Lens employs a lazy evaluation mechanism, initiating migrations without immediate execution. Schema changes are applied only when documents are read, queried, or updated. This approach reduces the upfront cost of extensive schema migrations while maintaining data consistency. + +- **On-Demand Schema Selection**: Lens supports on-demand schema selection during data queries. Users can specify the schema version they wish to work with, facilitating A/B testing and the seamless transition between different schema versions. + + + +These use cases highlight how Lens empowers users to manage schema migrations effectively, ensuring data consistency and adaptability in evolving database systems. + + +## Example + +In this example we will define a collection using a schema with an `emailAddress` field. We will then patch the schema to add a new field `email`, then define a bi-directional Lens to migrate data to/from the new field. 
+
+**Step One**, define the `Users` collection/schema:
+
+```shell
+defradb client schema add '
+  type Users {
+    emailAddress: String
+  }
+'
+```
+
+**Step Two**, patch the `Users` schema, adding the new field. Here we pass in `--set-active=true` to automatically apply the schema change to the `Users` collection:
+
+```shell
+defradb client schema patch '
+  [
+    { "op": "add", "path": "/Users/Fields/-", "value": {"Name": "email", "Kind": "String"} }
+  ]
+' --set-active=true
+```
+
+**Step Three**, fetch the schema IDs so that we can later tell Defra which schema versions we wish to migrate to/from:
+
+```shell
+defradb client schema describe --name="Users"
+```
+
+**Step Four**, in order to define our Lens module, we need to define four functions:
+
+- `next() unsignedInteger8`, this is a host function imported into the module. Calling it will return a pointer to a byte array that will contain either an error, an EndOfStream identifier (indicating that there are no more source values), or a pointer to the start of a json byte array containing the Defra document to migrate. It is typically called from within the `transform` and `inverse` functions, and can be called multiple times within them if desired.
+
+- `alloc(size: unsignedInteger64) unsignedInteger8`, this is required by all Lens modules regardless of language or content. This function should allocate a block of memory of the given `size`; it is used by the Lens engine to pass data into the wasm instance. The memory needs to remain reserved until the next wasm call, e.g. until `transform` or `set_param` has been called. Its implementation will differ depending on which language you are working with, but it should not need to differ between modules of the same language. The Rust SDK contains an alloc function that you can call.
+
+- `set_param(ptr: unsignedInteger8) unsignedInteger8`, this function is only required by modules that accept a set of parameters.
As an input parameter, it receives a single pointer to the start of a json byte array containing the parameters defined in the configuration file. It returns a pointer to either nil or an error message. It will be called once, when the migration is defined in Defra (and on restart of the database). How it is implemented is up to you.
+
+- `transform() unsignedInteger8`, this function is required by all Lens modules - it is the migration itself, and within this function you should define what the migration should do. In this example it will copy the data from the `emailAddress` field into the `email` field. Lens modules can call the `next` function zero to many times to draw documents from the Defra datastore; however, modules used in schema migrations should currently limit this to a single call per `transform` call (Lens based views may call it more or less frequently in order to filter or create documents).
+
+- `inverse() unsignedInteger8`, this function is optional; you only need to define it if you wish to define the inverse migration. It follows the same pattern as the `transform` function, but you should implement it to do the reverse. In this example we want it to copy the value from the `email` field into the `emailAddress` field.
+
+Here is what our migration would look like if we were to write it in Rust:
+
+```rust
+use std::collections::HashMap;
+use std::error::Error;
+use std::sync::RwLock;
+
+use serde::Deserialize;
+use lens_sdk::StreamOption;
+use lens_sdk::option::StreamOption::{Some, None, EndOfStream};
+
+// `ModuleError` is the module's own error enum; its definition is omitted here for brevity.
+
+#[link(wasm_import_module = "lens")]
+extern "C" {
+    fn next() -> *mut u8;
+}
+
+#[derive(Deserialize, Clone)]
+pub struct Parameters {
+    pub src: String,
+    pub dst: String,
+}
+
+static PARAMETERS: RwLock<StreamOption<Parameters>> = RwLock::new(None);
+
+#[no_mangle]
+pub extern fn alloc(size: usize) -> *mut u8 {
+    lens_sdk::alloc(size)
+}
+
+#[no_mangle]
+pub extern fn set_param(ptr: *mut u8) -> *mut u8 {
+    match try_set_param(ptr) {
+        Ok(_) => lens_sdk::nil_ptr(),
+        Err(e) => lens_sdk::to_mem(lens_sdk::ERROR_TYPE_ID, &e.to_string().as_bytes())
+    }
+}
+
+fn try_set_param(ptr: *mut u8) -> Result<(), Box<dyn Error>> {
+    let parameter = lens_sdk::try_from_mem::<Parameters>(ptr)?;
+
+    let mut dst = PARAMETERS.write()?;
+    *dst = parameter;
+    Ok(())
+}
+
+#[no_mangle]
+pub extern fn transform() -> *mut u8 {
+    match try_transform() {
+        Ok(o) => match o {
+            Some(result_json) => lens_sdk::to_mem(lens_sdk::JSON_TYPE_ID, &result_json),
+            None => lens_sdk::nil_ptr(),
+            EndOfStream => lens_sdk::to_mem(lens_sdk::EOS_TYPE_ID, &[]),
+        },
+        Err(e) => lens_sdk::to_mem(lens_sdk::ERROR_TYPE_ID, &e.to_string().as_bytes())
+    }
+}
+
+fn try_transform() -> Result<StreamOption<Vec<u8>>, Box<dyn Error>> {
+    let ptr = unsafe { next() };
+    let mut input = match lens_sdk::try_from_mem::<HashMap<String, serde_json::Value>>(ptr)? {
+        Some(v) => v,
+        // Implementations of `transform` are free to handle nil however they like. In this
+        // implementation we chose to return nil given a nil input.
+        None => return Ok(None),
+        EndOfStream => return Ok(EndOfStream)
+    };
+
+    let params = PARAMETERS.read()?
+        .clone()
+        .ok_or(ModuleError::ParametersNotSetError)?;
+
+    let value = input.get_mut(&params.src)
+        .ok_or(ModuleError::PropertyNotFoundError{requested: params.src.clone()})?
+        .clone();
+
+    let mut result = input.clone();
+    result.insert(params.dst, value);
+
+    let result_json = serde_json::to_vec(&result)?;
+    lens_sdk::free_transport_buffer(ptr)?;
+    Ok(Some(result_json))
+}
+
+#[no_mangle]
+pub extern fn inverse() -> *mut u8 {
+    match try_inverse() {
+        Ok(o) => match o {
+            Some(result_json) => lens_sdk::to_mem(lens_sdk::JSON_TYPE_ID, &result_json),
+            None => lens_sdk::nil_ptr(),
+            EndOfStream => lens_sdk::to_mem(lens_sdk::EOS_TYPE_ID, &[]),
+        },
+        Err(e) => lens_sdk::to_mem(lens_sdk::ERROR_TYPE_ID, &e.to_string().as_bytes())
+    }
+}
+
+fn try_inverse() -> Result<StreamOption<Vec<u8>>, Box<dyn Error>> {
+    let ptr = unsafe { next() };
+    let mut input = match lens_sdk::try_from_mem::<HashMap<String, serde_json::Value>>(ptr)? {
+        Some(v) => v,
+        // Implementations of `inverse` are free to handle nil however they like. In this
+        // implementation we chose to return nil given a nil input.
+        None => return Ok(None),
+        EndOfStream => return Ok(EndOfStream)
+    };
+
+    let params = PARAMETERS.read()?
+        .clone()
+        .ok_or(ModuleError::ParametersNotSetError)?;
+
+    // Note: In this example `inverse` is exactly the same as `transform`, only the usage
+    // of `params.dst` and `params.src` is reversed.
+    let value = input.get_mut(&params.dst)
+        .ok_or(ModuleError::PropertyNotFoundError{requested: params.dst.clone()})?
+        .clone();
+
+    let mut result = input.clone();
+    result.insert(params.src, value);
+
+    let result_json = serde_json::to_vec(&result)?;
+    lens_sdk::free_transport_buffer(ptr)?;
+    Ok(Some(result_json))
+}
+```
+
+More fully coded example modules, including an AssemblyScript example, can be found in our integration tests here: https://github.com/sourcenetwork/defradb/tree/develop/tests/lenses and here: https://github.com/lens-vm/lens/tree/main/tests/modules.
+
+We should then compile the module to wasm, and copy the resultant `.wasm` file to a location that the Defra node has access to. Make sure that the file remains at that location - at the moment, Defra will not copy it, and will refer back to that location on database restart.
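If the module above lives in a Rust crate configured as a `cdylib`, the compile-and-copy step might look like the following sketch (the crate name and destination path are hypothetical, not mandated by Defra):

```shell
# Build the lens crate for the wasm32 target (install the target first if needed).
rustup target add wasm32-unknown-unknown
cargo build --target wasm32-unknown-unknown --release

# Copy the module to a stable location the Defra node can always reach;
# "rename_email" is a hypothetical crate name.
cp target/wasm32-unknown-unknown/release/rename_email.wasm /var/lib/defra/lenses/
```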
+
+**Step Five**, now that we have updated the collection and defined our migration, we need to tell Defra to use it, by providing it the source and destination schema version IDs from our earlier `defradb client schema describe` call, and a configuration file defining the parameters we wish to pass it:
+
+```shell
+defradb client schema migration set <source schema version ID> <destination schema version ID> '
+  {
+    "lenses": [
+      {
+        "path": "<path to the compiled .wasm module>",
+        "arguments": {
+          "src": "emailAddress",
+          "dst": "email"
+        }
+      }
+    ]
+  }
+'
+```
+
+Now the migration has been configured! Any documents committed under the original schema version will now be returned as if they were committed using the newer schema version.
+
+As we have defined an inverse migration, we can give this migration to other nodes in our peer network still on the original schema version, and they will be able to query our documents committed using the new schema version by applying the inverse.
+
+We can also change our active schema version on this node back to the original to see the inverse in action:
+
+```shell
+defradb client schema set-active <schema version ID>
+```
+
+Now when we query Defra, any documents committed after the schema update will be rendered as if they were committed on the original schema version, with `email` field values being copied to the `emailAddress` field at query time.
+
+## Advantages
+
+Here are some advantages of Lens as a schema migration system:
+
+- Lens is not bound to a particular deployment, programming language, or interaction method. It can be used globally and is accessible to clients regardless of their location or infrastructure.
+- Users can query on-demand even with different schema versions.
+- Migration between different schemas is a seamless process.
+
+## Disadvantages
+
+The Lens migration system also has some downsides, which include:
+
+- Because of the lazy execution approach, errors might only be found later, when querying through the migration.
+- The Lens migration system is still a work in progress.
+- The performance of the system is currently secondary, with more focus on overall functionality.
+
+## Future Outlook
+
+The core problem with the current Lens schema migration system is performance when migrating schemas, hence for future versions, the following would be considered:
+
+- Increasing the performance of the migration system.
+- Making migrations easier to write.
+- Expansion of the schema update system to include the removal of fields, not just adding fields.
+- Enabling users to query the schema version of their choice on-demand.
+- Support for eager evaluation.
+- Implementing dry run testing for development and branching scenarios, and handling divergent schemas.
\ No newline at end of file
diff --git a/docs/guides/schema-relationship.md b/docs/defradb/guides/schema-relationship.md
similarity index 96%
rename from docs/guides/schema-relationship.md
rename to docs/defradb/guides/schema-relationship.md
index f59dabe..59745b6 100644
--- a/docs/guides/schema-relationship.md
+++ b/docs/defradb/guides/schema-relationship.md
@@ -73,7 +73,7 @@ type Address {
 ```graphql
 mutation {
-  create_Address(data: "{\"streetNumber\": \"123\", \"streetName\": \"Test road\", \"country\": \"Canada\"}") {
+  create_Address(input: {streetNumber: "123", streetName: "Test road", country: "Canada"}) {
     _key
   }
 }
@@ -81,7 +81,7 @@ mutation {
 ```graphql
 mutation {
-  create_User(data: "{\"name\": \"Alice\", \"username\": \"awesomealice\", \"age\": 35, \"address_id\": \"bae-fd541c25-229e-5280-b44b-e5c2af3e374d\"}") {
+  create_User(input: {name: "Alice", username: "awesomealice", age: 35, address_id: "bae-fd541c25-229e-5280-b44b-e5c2af3e374d"}) {
     _key
   }
 }
@@ -177,7 +177,7 @@ defradb client schema add -f schema.graphql
 ```graphql
 mutation {
-  create_Author(data: "{\"name\": \"Saadi\",\"dateOfBirth\": \"1210-07-23T03:46:56.647Z\"}") {
+  create_Author(input: {name: "Saadi", dateOfBirth:
"1210-07-23T03:46:56.647Z"}) { _key } } @@ -186,7 +186,7 @@ mutation { ```graphql mutation { - create_Book(data: "{\"name\": \"Gulistan\",\"genre\": \"Poetry\", \"author_id\": \"bae-0e7c3bb5-4917-5d98-9fcf-b9db369ea6e4\"}") { + create_Book(input: {name: "Gulistan", genre: "Poetry", author_id: "bae-0e7c3bb5-4917-5d98-9fcf-b9db369ea6e4"}) { _key } } @@ -195,7 +195,7 @@ mutation { ```graphql mutation { - update_Author(id: "bae-0e7c3bb5-4917-5d98-9fcf-b9db369ea6e4", data: "{\"name\": \"Saadi Shirazi\"}") { + update_Author(id: "bae-0e7c3bb5-4917-5d98-9fcf-b9db369ea6e4", input: {name: "Saadi Shirazi"}) { _key } } @@ -204,7 +204,7 @@ mutation { ```graphql mutation { - update_Book(filter: {name: {_eq: "Gulistan"}}, data: "{\"description\": \"Persian poetry of ideas\"}") { + update_Book(filter: {name: {_eq: "Gulistan"}}, input: {description: "Persian poetry of ideas"}) { _key } } diff --git a/docs/defradb/guides/secondary-index.md b/docs/defradb/guides/secondary-index.md new file mode 100644 index 0000000..1c10478 --- /dev/null +++ b/docs/defradb/guides/secondary-index.md @@ -0,0 +1,263 @@ +--- +sidebar_label: Secondary index guide +sidebar_position: 60 +--- + +## Introduction + +DefraDB provides a powerful and flexible secondary indexing system that enables efficient document lookups and queries. + +## About + +The following sections provide an overview of performance considerations, indexing related objects, and JSON field indexing. + +### Performance considerations + +Indexes can greatly improve query performance, but they also impact system performance during writes. Each index adds write overhead since every document update must also update the relevant indexes. Despite this, the boost in read performance for indexed queries usually makes this trade-off worthwhile. + +#### To optimize performance: + +- Choose indexes based on your query patterns. Focus on fields frequently used in query filters to maximize efficiency. +- Avoid indexing rarely queried fields. 
Doing so adds unnecessary overhead. +- Be cautious with unique indexes. These require extra validation, making their performance impact more significant. + +Plan your indexes carefully to balance read and write performance. + +### Indexing related objects + +DefraDB supports indexing relationships between documents, allowing for efficient queries across related data. + +#### Example schema: Users and addresses + +```graphql +type User { + name: String + age: Int + address: Address @primary @index +} + +type Address { + user: User + city: String @index + street: String +} +``` + +Key indexes in this schema: + +- **City field in address:** Indexed to enable efficient queries by city. +- **Relationship between user and address**: Indexed to support fast lookups based on relationships. + +#### Query example + +The following query retrieves all users living in Montreal: + +```graphql +query { + User(filter: { + address: {city: {_eq: "Montreal"}} + }) { + name + } +} +``` + +#### How indexing improves efficiency + +**Without indexes:** +- Fetch all user documents. +- For each user, retrieve the corresponding Address. This approach becomes slow with large datasets. + +**With indexes:** +- Fetch address documents matching the city value directly. +- Retrieve the corresponding User documents. This method is much faster because indexes enable direct lookups. + +#### Enforcing relationship cardinality +Indexes can also enforce one-to-one relationships. For instance, to ensure each User has exactly one unique Address: + +```graphql +type User { + name: String + age: Int + address: Address @primary @index(unique: true) +} + +type Address { + user: User + city: String @index + street: String +} +``` + +Here, the @index(unique: true) constraint ensures no two Users can share the same Address. Without it, the relationship defaults to one-to-many, allowing multiple Users to reference a single Address. 
+
+By combining relationship indexing with cardinality constraints, you can create highly efficient and logically consistent data structures.
+
+### JSON field indexing
+
+DefraDB offers a specialized indexing system for JSON fields, designed to handle their hierarchical structure efficiently.
+
+#### JSON indexing overview
+
+JSON fields differ from other field types (e.g., Int, String, Bool) because they are structured hierarchically. DefraDB uses a path-aware system to manage these complexities, enabling traversal and indexing of all leaf nodes in a JSON document.
+
+#### JSON Interface
+
+DefraDB's JSON interface, defined in `client/json.go`, is essential for managing JSON fields. It allows the system to:
+
+- Traverse all leaf nodes in a JSON document.
+- Represent a JSON value as either a complete document or a single node within the structure.
+
+Each JSON value also stores its path information, which is crucial for creating accurate and efficient indexes.
+
+##### Example JSON Document
+
+```json
+{
+  "user": {
+    "device": {
+      "model": "iPhone"
+    }
+  }
+}
+```
+
+Here, the `iPhone` value is represented with its complete path: [`user`, `device`, `model`]. This path-aware representation ensures that the system knows not just the value, but where it resides within the document.
+
+#### Inverted Indexes for JSON
+DefraDB uses inverted indexes for JSON fields. These indexes reverse the traditional "document-to-value" relationship by starting with a value and quickly locating all documents containing that value.
+
+#### Key Format for JSON Indexes
+
+```
+<collection ID>/<index ID>(/<JSON path>/<value>)+/<document ID>
+```
+
+##### How It Differs
+
+- Regular fields map to a single index entry.
+- JSON fields generate multiple entries - one for each leaf node, incorporating both the path and the value.
+
+During indexing, the system traverses the entire JSON structure, creating these detailed index entries.
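To make the path-and-value expansion concrete, here is a small self-contained Rust sketch. It uses a toy JSON tree rather than DefraDB's actual `client/json.go` interface, and emits one path-aware entry per leaf node, in the spirit of the inverted-index entries described above:

```rust
// A toy JSON value tree; DefraDB's real interface lives in client/json.go.
enum Json {
    Str(&'static str),
    Object(Vec<(&'static str, Json)>),
}

// Walk the tree, emitting one "/<path>/<value>" entry per leaf node, so a
// value can be looked up together with the path it resides at.
fn leaf_entries(value: &Json, path: &mut Vec<&'static str>, out: &mut Vec<String>) {
    match value {
        Json::Str(s) => out.push(format!("/{}/{}", path.join("/"), s)),
        Json::Object(fields) => {
            for (key, child) in fields {
                path.push(key);
                leaf_entries(child, path, out);
                path.pop();
            }
        }
    }
}

fn main() {
    // {"user": {"device": {"model": "iPhone"}}}
    let doc = Json::Object(vec![(
        "user",
        Json::Object(vec![(
            "device",
            Json::Object(vec![("model", Json::Str("iPhone"))]),
        )]),
    )]);

    let mut out = Vec::new();
    leaf_entries(&doc, &mut Vec::new(), &mut out);

    // One entry per leaf, combining path and value.
    println!("{:?}", out); // ["/user/device/model/iPhone"]
}
```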
+
+#### Value normalization in JSON
+DefraDB normalizes JSON leaf values to ensure consistency in ordering and comparisons. For example:
+
+- JSON values include their normalized value and path information.
+- Scalar types (e.g., integers) are normalized to a standard type, such as `int64`.
+
+This ensures that operations like filtering and sorting are reliable and efficient.
+
+#### How indexing works
+When indexing a document with JSON fields, the system:
+
+1. Traverses the JSON structure using the JSON interface.
+2. Generates index entries for every leaf node, combining path and normalized value.
+3. Stores entries efficiently, enabling direct querying.
+
+##### Query example
+Retrieve documents where the model is "iPhone":
+
+```graphql
+query {
+  Collection(filter: {
+    jsonField: {
+      user: {
+        device: {
+          model: {_eq: "iPhone"}
+        }
+      }
+    }
+  })
+}
+```
+
+With indexes, the system directly retrieves matching documents, avoiding the need to scan and parse the JSON during queries.
+
+#### Benefits of JSON field indexing
+- **Efficient queries**: Leverages inverted indexes for fast lookups, even in deeply nested structures.
+- **Precise path tracking**: Maintains path information for accurate indexing and retrieval.
+- **Scalable structure**: Handles complex JSON documents with minimal performance overhead.
+
+## Usage
+
+The `@index` directive can be used on GraphQL schema objects and field definitions to configure indexes.
+
+`@index(name: String, unique: Bool, direction: ORDERING, includes: [{ field: String, direction: ORDERING }])`
+
+### `name`
+Sets the index name. Defaults to concatenated field names with direction.
+
+### `unique`
+Makes the index unique. Defaults to false.
+
+### `direction`
+Sets the default index direction for all fields. Can be one of ASC (ascending) or DESC (descending). Defaults to ASC.
+
+If a field in the `includes` list does not specify a direction, the default direction from this value will be used instead.
+
+### `includes`
+Sets the fields the index is created on.
+
+When the directive is used on a field definition and the field is not in the `includes` list, the field will be implicitly added as the first entry.
+
+## Examples
+
+### Field level usage
+
+Creates an index on the User name field with DESC direction.
+
+```gql
+type User {
+  name: String @index(direction: DESC)
+}
+```
+
+### Schema level usage
+
+Creates an index on the User name field with default direction (ASC).
+
+```gql
+type User @index(includes: {field: "name"}) {
+  name: String
+  age: Int
+}
+```
+
+### Unique index
+
+Creates a unique index on the User name field with default direction (ASC).
+
+```gql
+type User {
+  name: String @index(unique: true)
+}
+```
+
+### Composite index
+
+Creates a composite index on the User name and age fields with default direction (ASC).
+
+```gql
+type User @index(includes: [{field: "name"}, {field: "age"}]) {
+  name: String
+  age: Int
+}
+```
+
+### Relationship index
+
+Creates a unique index on the User relationship to Address. The unique index constraint ensures that no two Users can reference the same Address document.
+
+```gql
+type User {
+  name: String
+  age: Int
+  address: Address @primary @index(unique: true)
+}
+
+type Address {
+  user: User
+  city: String
+  street: String
+}
+```
diff --git a/docs/guides/time-traveling-queries.md b/docs/defradb/guides/time-traveling-queries.md
similarity index 98%
rename from docs/guides/time-traveling-queries.md
rename to docs/defradb/guides/time-traveling-queries.md
index 0c95286..0d3b6d1 100644
--- a/docs/guides/time-traveling-queries.md
+++ b/docs/defradb/guides/time-traveling-queries.md
@@ -16,12 +16,12 @@ The Web2 stack has traditional databases, like Postgres or MySQL, that usually h
 A powerful feature of a time-traveling query is that very little work is required from the developer to turn a traditional non-time-traveling query into a time-traveling query.
Each update a document goes through gets a version identifier known as a Content Identifier (CID). CIDs are a function of the data model and are used to build out the time-traveling queries. These CIDs can be used to refer to a version that contains some piece of data. Instead of using some human-invented notion of semantic version labels, like Version 1 or Version 3.1 alpha, it uses the hash of the data as the actual identifier. The user can take the entire state of a document and create a single constant-sized CID. Each update to the document produces a new version identifier for the document, including new version identifiers for its individual fields. The developer then only needs to submit a time-traveling query using the doc key of the document they want to query backward through its state, just like in a regular query. The only difference is that the developer adds the version identifier, expressed as its CID, as an additional argument, and the query will fetch the specific update that was made to the document.
 
 ```graphql
-# Here we fetch a User of the given dockey, in the state that it was at
+# Here we fetch a User of the given docID, in the state that it was at
 # at the commit matching the given CID.
 query {
   User (
       cid: "bafybeieqnthjlvr64aodivtvtwgqelpjjvkmceyz4aqerkk5h23kjoivmu",
-      dockey: "bae-52b9170d-b77a-5887-b877-cbdbb99b009f"
+      docID: "bae-d4303725-7db9-53d2-b324-f3ee44020e52"
   ) {
     name
     age
@@ -33,7 +33,7 @@ query {
 The mechanism behind time-traveling queries is based on the Merkle CRDT system and the data model of the documents discussed in the above sections. Each time a document is updated, a log of updates known as the Update Graph is recorded. This graph consists of every update that the user makes to the document, from the beginning to the Nth update. In addition to the document update graph, we also have an independent and individual update graph for each field of the document.
The document update graph captures the overall updates made to the document, whereas the independent and individual update graphs capture the changes made to a specific field of the document. The data model, as discussed in the Usage section, works in a similar fashion, where it keeps appending the updates of the document to its present state. So even if a user deletes any information in the document, this will be recorded as an update within the update graph. Hence, no information gets deleted from the document, as all updates are stored in the update graph.
 
-[Include link to CRDT doc here]
+[Merkle CRDT Guide](./merkle-crdt.md)
 
Since we now have this update graph of changes, the query also takes its mechanism from the inherent properties of the Delta State Merkle CRDTs. Under this, the actual content of the update added by the user to the document is known as the Delta Payload. The delta payload is the information required to go from a previous state to the next state, where the value of the next state is set by some other user. For example, suppose a team of developers is working on a document and one of them wants to change the name of the document; in this case, the delta payload of the new update would be the name of the document set by that user. Hence, time-traveling queries work on two core concepts: the appending update graph, and the delta payload, which contains the information required to go from the previous state to the next state. With both of these, whenever a user submits a regular query, the query caches the present state of the document within the database, and we internally issue a time-traveling query for the current state. The upside is that the user can submit a non-time-traveling query faster, since a cached version of the same is already stored in the database.
Thus, using this cached version of the present state of the document, the user can apply a time-traveling query using the CID of the specific version they want to query in the document. The database will then set the CID provided by the user as the Target State. The query then goes back to the beginning of the document's history, known as the Genesis State, and applies every update until it reaches the Target State.
diff --git a/docs/references/_category_.json b/docs/defradb/references/_category_.json
similarity index 100%
rename from docs/references/_category_.json
rename to docs/defradb/references/_category_.json
diff --git a/docs/defradb/references/acp.md b/docs/defradb/references/acp.md
new file mode 100644
index 0000000..9e362e3
--- /dev/null
+++ b/docs/defradb/references/acp.md
@@ -0,0 +1,795 @@
+---
+sidebar_label: ACP
+sidebar_position: 0
+---
+
+# Introduction
+
+In the realm of information technology (IT) and cybersecurity, **access control** plays a pivotal role in ensuring the confidentiality, integrity, and availability of sensitive resources. Let's delve into why access control policies are crucial for protecting your valuable data.
+
+## What Is Access Control?
+
+**Access control** is a mechanism that regulates who or what can view, use, or access a specific resource within a computing environment. Its primary goal is to minimize security risks by ensuring that only **authorized users**, systems, or services have access to the resources they need. But it's more than just granting or denying access; it involves several key components:
+
+1. **Authentication**: Verifying the identity of an individual or system.
+2. **Authorization**: Determining what actions or operations an actor is allowed to perform.
+3. **Access**: Granting or denying access based on authorization.
+4. **Management**: Administering access rights and permissions.
+5. **Audit**: Tracking and monitoring access patterns for accountability.
+
+## Why Is Access Control Important?
+
+1. **Mitigating Security Risks**: Cybercriminals are becoming increasingly sophisticated, employing advanced techniques to breach security systems. By controlling who has access to your database, you significantly reduce the risk of unauthorized access, both from external attackers and insider threats.
+
+2. **Compliance with Regulations**: Various regulatory requirements, such as the **General Data Protection Regulation (GDPR)** and the **Health Insurance Portability and Accountability Act (HIPAA)**, mandate stringent access control measures to protect personal data. Implementing access control ensures compliance with these regulations.
+
+3. **Preventing Data Breaches**: Access control acts as a proactive measure to deter, detect, and prevent unauthorized access. It ensures that only those with the necessary permissions can access sensitive data or services.
+
+4. **Managing Complexity**: Modern IT infrastructure, including cloud computing and mobile devices, has exponentially increased the number of access points. Technologies like **identity and access management (IAM)** and approaches like **zero trust** help manage this complexity effectively.
+
+## Types of Security Access Controls
+
+Several access control models exist, including:
+
+- **Role-Based Access Control (RBAC)**: Assigns permissions to roles, which are then granted to users. A user's active role then defines their access (e.g., admin, user, manager).
+- **Attribute-Based Access Control (ABAC)**: Considers various attributes (e.g., user attributes, resource attributes) for access decisions.
+- **Discretionary Access Control (DAC)**: Users with sufficient permissions (resource owners) are able to grant or share an object with other users.
+- **Mandatory Access Control (MAC)**: Users are not allowed to grant access to other users. Permissions are granted based on a minimum role/hierarchy (security labels and clearances) that must be met.
+- **Policy-Based Access Control (PBAC)**: Enforces access based on defined policies.
+- **Relation-Based Access Control (ReBAC)**: Relations between objects and users in the system are used to derive their permissions.
+
+- Note: **DefraDB's** access control rules strongly resemble **Discretionary Access Control (DAC)**, implemented through a **Relation-Based Access Control (ReBAC) engine**.
+
+## Challenges of Access Control in Cybersecurity
+
+- **Distributed IT Environments**: Cloud computing and remote work create new challenges.
+- **Rise of Mobility**: Mobile devices in the workplace add complexity.
+- **Password Fatigue**: Balancing security with usability.
+- **Data Governance**: Ensuring visibility and control.
+- **Multi-Tenancy**: Managing complex permissions in SaaS applications.
+
+## Key takeaway
+A robust access control policy system is your first line of defense against unauthorized access and data breaches.
+
+
+# DefraDB's Access Control System
+
+## ReBAC Authorization Model
+
+### Zanzibar
+In 2019, Google published their [Zanzibar](https://research.google/pubs/zanzibar-googles-consistent-global-authorization-system/) paper, which explains how they handle authorization across their many services. It uses access control lists, but with relationship-based access control rather than role-based access control. Relationship-Based Access Control (ReBAC) establishes an authorization model where a subject's permission to access an object is defined by the presence of relationships between those subjects and objects.
+Zanzibar works by exposing an API with (mainly) operations to manage `Relationships` (`tuples`) and to verify access requests (can Bob do X?) through the `Check` call. A `tuple` includes a subject, a relation, and an object. The `Check` call performs a graph search over the `tuples` to find a path between the user and the object; if such a path exists then, according to ReBAC, the user has the queried permission.
It operates as a consistent and partition-tolerant system.
+
+### Zanzi
+However, the Zanzibar API is centralized, so we (Source Network) created a decentralized implementation of Zanzibar called **Zanzi**, which is powered by our SourceHub trust protocol. Zanzi is a general-purpose Zanzibar implementation that operates over a KV persistence layer.
+
+### SourceHub ACP Module
+DefraDB wraps the `local` and `remote` SourceHub ACP Modules to bring all that magic to DefraDB.
+
+In order to set up the relation-based access control, SourceHub requires an agreed-upon contract which models the `relations`, `permissions`, and `actors`. That contract is referred to as a `SourceHub Policy`. The policy models all the `relations` and `permissions` under a `resource`.
+A `resource` corresponds to the "thing" that we want to gate the access control around. This can be a `Type`, `Container`, `Schema`, `Shape`, or anything that has Objects that need access control. Once the policy is finalized, it has to be uploaded to the `SourceHub Module` so it can be used.
+Once the `Policy` is uploaded to the `SourceHub Module`, an `Actor` can begin registering `Objects` for access control by linking to a `Resource` that exists on the uploaded `Policy`.
+After the `Object` is registered successfully, the `Actor` will then get a special built-in relation with that `Object` called the `"owner"` relation. This relation is given to the `Registerer` of an `Object`.
+Then an `Actor` can issue `Check` calls to see if they have access to an `Object`.
+
+## Document Access Control (DAC)
+In DefraDB's case, we wanted to gate access control around the `Documents` that belong to a specific `Collection`. Here, the `Collection` (i.e. the type/shape of the `Object`) can be thought of as the `Resource`, and the `Documents` are the `Objects`.
+
+
+## Field Access Control (FAC) (coming soon)
+We also want the ability to do more granular access control than just DAC.
Therefore we have `Field`-level access control for situations where some fields of a `Document` need to be private while others do not. In this case the `Document` becomes the `Resource` and the `Fields` are the `Objects` being gated.
+
+
+## Admin Access Control (AAC) (coming soon)
+We also want to model access control around the `Admin Level Operations` that exist in `DefraDB`. In this case the entire `Database` would be the `Resource` and the `Admin Level Operations` are the `Objects` being gated.
+
+A non-exhaustive list of operations only admins should have access to:
+- Ability to turn off ACP
+- Ability to interact with the P2P system
+
+## SourceHub Policies Are Too Flexible
+SourceHub Policies are too flexible (at least until the ability to define `Meta Policies` is implemented). This is because SourceHub leaves it up to the user to specify any type of `Permissions` and `Relations`. However, for DefraDB there are certain guarantees that **MUST** be maintained in order for the `Policy` to be effective. For example, the user can input any name for a `Permission` or `Relation` that DefraDB has no knowledge of. Another example is a user making a `Policy` that does not give any `Permission` to the `owner`, which means that in the case of DAC no one would have any access to the `Document` they created.
+Therefore, there was a very clear need to define some rules for writing a `Resource` in a `Policy` which will be used with DefraDB's DAC, FAC, or AAC. These rules guarantee that certain `Required Permissions` will always be present on a `Resource` and that the `owner` has the correct `Permissions`.
+
+We call these rules DPI, a.k.a. the DefraDB Policy Interface.
+
+## Terminology
+- 'SourceHub Address' is a `Bech32` Address with a specific SourceHub prefix.
+- 'Identity' is a combination of a SourceHub Address and a Key-Pair Signature.
+- 'DPI' means 'DefraDB Policy Interface'.
+- 'Partially-DPI' policy means a policy with at least one DPI-compliant resource.
+- 'Permissioned Collection' means a collection with a policy on it, like: `@policy(id:".." resource: "..")`
+- 'Permissioned Request' means a request made with a SourceHub Identity.
+
+
+## DAC DPI Rules
+
+To qualify as a DPI-compliant `resource`, the following rules **MUST** be satisfied:
+- The resource **must include** the mandatory `registerer` (`owner`) relation within the `relations` attribute.
+- The resource **must encompass** all the required permissions under the `permissions` attribute.
+- Every required permission must have the required registerer relation (`owner`) in `expr`.
+- The required registerer relation **must be positioned** as the leading (first) relation in `expr` (see example below).
+- Any relation after the required registerer relation must only be a union set operation (`+`).
+
+For a `Policy` to be DPI compliant for DAC, all of its `resources` must be DPI compliant.
+To be `Partially-DPI`, at least one of its `resources` must be DPI compliant.
+
+### More Into The Weeds:
+
+The mandatory permissions are:
+- Specified in the `dpi.go` file within the variable `dpiRequiredPermissions`.
+
+The name of the required 'registerer' relation is:
+- Specified in the `dpi.go` file within the variable `requiredRegistererRelationName`.
+
+### DPI Resource Examples:
+- Check out the tests here: [tests/integration/acp/schema/add_dpi](/tests/integration/acp/schema/add_dpi)
+- The linked tests are broken into `accept_*_test.go` and `reject_*_test.go` files.
+- Accepted tests document valid DPIs (as the schema is accepted).
+- Rejected tests document invalid DPIs (as the schema is rejected).
+- There are also some Partially-DPI tests that are both accepted and rejected depending on the resource.
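The `expr` rules for a required permission can be illustrated with a small validation sketch. This is hypothetical code for illustration only — DefraDB's actual validation lives in `dpi.go` and is not reproduced here:

```go
package main

import (
	"fmt"
	"strings"
)

// validateDPIExpr is a toy check of a required permission's expression
// against the DAC DPI rules: the required registerer relation ("owner")
// must be the leading relation, and anything after it may only be joined
// with the union set operation (+).
func validateDPIExpr(expr string) error {
	trimmed := strings.TrimSpace(expr)
	if !strings.HasPrefix(trimmed, "owner") {
		return fmt.Errorf("expr %q: required relation 'owner' must come first", expr)
	}
	rest := strings.TrimLeft(trimmed[len("owner"):], " ")
	// rest is empty for `expr: owner`; otherwise it must start with a union.
	// This also rejects names that merely begin with "owner" (e.g. "ownerMalicious").
	if rest != "" && !strings.HasPrefix(rest, "+") {
		return fmt.Errorf("expr %q: only a union (+) may follow 'owner'", expr)
	}
	return nil
}

func main() {
	for _, expr := range []string{"owner", "owner + reader", "owner-reader", "ownerMalicious"} {
		fmt.Println(expr, "valid:", validateDPIExpr(expr) == nil)
	}
}
```

This sketch only inspects the head of the expression; it ignores operator precedence and what follows the first union.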
+
+### Required Permission's Expression:
+Even though the following expressions are valid generic policy expressions, they will make a
+DPI-compliant resource lose its DPI status, as these expressions are not in accordance with
+our DPI [rules](#dac-dpi-rules). Assuming these `expr` are under a required permission label:
+- `expr: owner-owner`
+- `expr: owner-reader`
+- `expr: owner&reader`
+- `expr: owner - reader`
+- `expr: ownerMalicious + owner`
+- `expr: ownerMalicious`
+- `expr: owner_new`
+- `expr: reader+owner`
+- `expr: reader-owner`
+- `expr: reader - owner`
+
+Here are some valid expression examples. Assuming these `expr` are under a required permission label:
+- `expr: owner`
+- `expr: owner + reader`
+- `expr: owner +reader`
+- `expr: owner+reader`
+
+## DAC Usage CLI:
+
+### Authentication
+
+To perform authenticated operations you will need to generate a `secp256k1` key pair.
+
+The command below will generate a new secp256k1 private key and print it as a 256-bit hexadecimal value:
+
+```sh
+openssl ecparam -name secp256k1 -genkey | openssl ec -text -noout | head -n5 | tail -n3 | tr -d '\n:\ '
+```
+
+Copy the private key hex from the output:
+
+```sh
+read EC key
+e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Use the private key to generate authentication tokens for each request:
+
+```sh
+defradb client ... \
--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+### Adding a Policy:
+
+We have in `examples/dpi_policy/user_dpi_policy.yml`:
+```yaml
+description: A Valid DefraDB Policy Interface (DPI)
+
+actor:
+  name: actor
+
+resources:
+  users:
+    permissions:
+      read:
+        expr: owner + reader
+      write:
+        expr: owner
+
+    relations:
+      owner:
+        types:
+          - actor
+      reader:
+        types:
+          - actor
+```
+
+CLI Command:
+```sh
+defradb client acp policy add -f examples/dpi_policy/user_dpi_policy.yml --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "PolicyID": "50d354a91ab1b8fce8a0ae4693de7616fb1d82cfc540f25cfbe11eb0195a5765"
+}
+```
+
+### Add schema, linking to a resource within the policy we added:
+
+We have in `examples/schema/permissioned/users.graphql`:
+```graphql
+type Users @policy(
+  id: "50d354a91ab1b8fce8a0ae4693de7616fb1d82cfc540f25cfbe11eb0195a5765",
+  resource: "users"
+) {
+  name: String
+  age: Int
+}
+```
+
+CLI Command:
+```sh
+defradb client schema add -f examples/schema/permissioned/users.graphql
+```
+
+Result:
+```json
+[
+  {
+    "Name": "Users",
+    "ID": 1,
+    "RootID": 1,
+    "SchemaVersionID": "bafkreihhd6bqrjhl5zidwztgxzeseveplv3cj3fwtn3unjkdx7j2vr2vrq",
+    "Sources": [],
+    "Fields": [
+      {
+        "Name": "_docID",
+        "ID": 0
+      },
+      {
+        "Name": "age",
+        "ID": 1
+      },
+      {
+        "Name": "name",
+        "ID": 2
+      }
+    ],
+    "Indexes": [],
+    "Policy": {
+      "ID": "50d354a91ab1b8fce8a0ae4693de7616fb1d82cfc540f25cfbe11eb0195a5765",
+      "ResourceName": "users"
+    }
+  }
+]
+```
+
+### Create private documents (with identity)
+
+CLI Command:
+```sh
+defradb client collection create --name Users '[{ "name": "SecretShahzad" }, { "name": "SecretLone" }]' --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+### Create public documents (without identity)
+
+CLI Command:
+```sh
+defradb client collection create --name Users '[{ "name": "PublicShahzad" }, { "name": "PublicLone" }]'
+```
+
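Conceptually, each private document created above is registered with the ACP system, and the creating identity receives the built-in `owner` relation; later reads resolve like a Zanzibar-style `Check` over relationship tuples. A minimal sketch of that idea follows — the names and the flattened search are assumptions for illustration, not DefraDB's implementation:

```go
package main

import "fmt"

// Tuple records one relationship: subject has `relation` on object.
type Tuple struct {
	Object   string
	Relation string
	Subject  string
}

// check is a toy Zanzibar-style Check: it searches the tuples for a
// relationship granting `subject` the asked-for permission on `object`.
// Mirroring a policy with `read: expr: owner + reader`, both the owner
// and an explicit reader may read, while only the owner may write.
func check(tuples []Tuple, object, permission, subject string) bool {
	for _, t := range tuples {
		if t.Object != object || t.Subject != subject {
			continue
		}
		switch permission {
		case "read":
			if t.Relation == "owner" || t.Relation == "reader" {
				return true
			}
		case "write":
			if t.Relation == "owner" {
				return true
			}
		}
	}
	return false
}

func main() {
	// Creating a private document registers it and grants the creator
	// the built-in "owner" relation.
	tuples := []Tuple{{Object: "Users:SecretShahzad", Relation: "owner", Subject: "alice"}}

	fmt.Println(check(tuples, "Users:SecretShahzad", "read", "alice")) // true
	fmt.Println(check(tuples, "Users:SecretShahzad", "read", "bob"))   // false
}
```

Public documents are simply never registered, so no tuple exists for them and no `Check` gates their access.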
+### Get all docIDs without an identity (shows only public):
+CLI Command:
+```sh
+defradb client collection docIDs
+```
+
+Result:
+```json
+{
+  "docID": "bae-63ba68c9-78cb-5060-ab03-53ead1ec5b83",
+  "error": ""
+}
+{
+  "docID": "bae-ba315e98-fb37-5225-8a3b-34a1c75cba9e",
+  "error": ""
+}
+```
+
+
+### Get all docIDs with an identity (shows public and owned documents):
+CLI Command:
+```sh
+defradb client collection docIDs --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "docID": "bae-63ba68c9-78cb-5060-ab03-53ead1ec5b83",
+  "error": ""
+}
+{
+  "docID": "bae-a5830219-b8e7-5791-9836-2e494816fc0a",
+  "error": ""
+}
+{
+  "docID": "bae-ba315e98-fb37-5225-8a3b-34a1c75cba9e",
+  "error": ""
+}
+{
+  "docID": "bae-eafad571-e40c-55a7-bc41-3cf7d61ee891",
+  "error": ""
+}
+```
+
+
+### Access the private document (including field names):
+CLI Command:
+```sh
+defradb client collection get --name Users "bae-a5830219-b8e7-5791-9836-2e494816fc0a" --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "_docID": "bae-a5830219-b8e7-5791-9836-2e494816fc0a",
+  "name": "SecretShahzad"
+}
+```
+
+### Access the private document without an identity:
+CLI Command:
+```sh
+defradb client collection get --name Users "bae-a5830219-b8e7-5791-9836-2e494816fc0a"
+```
+
+Error:
+```
+  Error: document not found or not authorized to access
+```
+
+### Access the private document with a wrong identity:
+CLI Command:
+```sh
+defradb client collection get --name Users "bae-a5830219-b8e7-5791-9836-2e494816fc0a" --identity 4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5
+```
+
+Error:
+```
+  Error: document not found or not authorized to access
+```
+
+### Update private document:
+CLI Command:
+```sh
+defradb client collection update --name Users --docID "bae-a5830219-b8e7-5791-9836-2e494816fc0a" --updater '{
"name": "SecretUpdatedShahzad" }' --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac +``` + +Result: +```json +{ + "Count": 1, + "DocIDs": [ + "bae-a5830219-b8e7-5791-9836-2e494816fc0a" + ] +} +``` + +#### Check if it actually got updated: +CLI Command: +```sh +defradb client collection get --name Users "bae-a5830219-b8e7-5791-9836-2e494816fc0a" --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac +``` + +Result: +```json +{ + "_docID": "bae-a5830219-b8e7-5791-9836-2e494816fc0a", + "name": "SecretUpdatedShahzad" +} +``` + +### Update With Filter example (coming soon) + +### Delete private document: +CLI Command: +```sh +defradb client collection delete --name Users --docID "bae-a5830219-b8e7-5791-9836-2e494816fc0a" --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac +``` + +Result: +```json +{ + "Count": 1, + "DocIDs": [ + "bae-a5830219-b8e7-5791-9836-2e494816fc0a" + ] +} +``` + +#### Check if it actually got deleted: +CLI Command: +```sh +defradb client collection get --name Users "bae-a5830219-b8e7-5791-9836-2e494816fc0a" --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac +``` + +Error: +``` + Error: document not found or not authorized to access +``` + +### Delete With Filter example (coming soon) + +### Typejoin example (coming soon) + +### View example (coming soon) + +### P2P example (coming soon) + +### Backup / Import example (coming soon) + +### Secondary Indexes example (coming soon) + +### Execute Explain example (coming soon) + +### Sharing Private Documents With Others + +To share a document (or grant a more restricted access) with another actor, we must add a relationship between the +actor and the document. Inorder to make the relationship we require all of the following: + +1) **Target DocID**: The `docID` of the document we want to make a relationship for. +2) **Collection Name**: The name of the collection that has the `Target DocID`. 
+3) **Relation Name**: The type of relation (the name must be defined within the linked policy on the collection).
+4) **Target Identity**: The identity of the actor the relationship is being made with.
+5) **Requesting Identity**: The identity of the actor that is making the request.
+
+Note:
+  - ACP must be available (i.e. ACP can not be disabled).
+  - The collection with the target document must have a valid policy and resource linked.
+  - The target document must already be registered with ACP (a private document).
+  - The requesting identity MUST be either the owner OR the manager (manages the relation) of the resource.
+  - If the specified relation was not granted the minimum DPI permissions (read or write) within the policy,
+    and a relationship is formed, the subject/actor will still not be able to access (read or write) the resource.
+  - If the relationship already exists, then it will just be a no-op.
+
+Consider the following policy that we have under `examples/dpi_policy/user_dpi_policy_with_manages.yml`:
+
+```yaml
+name: An Example Policy
+
+description: A Policy
+
+actor:
+  name: actor
+
+resources:
+  users:
+    permissions:
+      read:
+        expr: owner + reader + writer
+
+      write:
+        expr: owner + writer
+
+      nothing:
+        expr: dummy
+
+    relations:
+      owner:
+        types:
+          - actor
+
+      reader:
+        types:
+          - actor
+
+      writer:
+        types:
+          - actor
+
+      admin:
+        manages:
+          - reader
+        types:
+          - actor
+
+      dummy:
+        types:
+          - actor
+```
+
+Add the policy:
+```sh
+defradb client acp policy add -f examples/dpi_policy/user_dpi_policy_with_manages.yml \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "PolicyID": "ec11b7e29a4e195f95787e2ec9b65af134718d16a2c9cd655b5e04562d1cabf9"
+}
+```
+
+Add schema, linking to the users resource and our policyID:
+```sh
+defradb client schema add '
+type Users @policy(
+  id: "ec11b7e29a4e195f95787e2ec9b65af134718d16a2c9cd655b5e04562d1cabf9",
+  resource: "users"
+) {
+  name: String
+  age: Int
+}
+'
+```
+
+Result:
+```json
+[
+  {
+    "Name": "Users",
+    "ID": 1,
+    "RootID": 1,
+    "SchemaVersionID": "bafkreihhd6bqrjhl5zidwztgxzeseveplv3cj3fwtn3unjkdx7j2vr2vrq",
+    "Sources": [],
+    "Fields": [
+      {
+        "Name": "_docID",
+        "ID": 0,
+        "Kind": null,
+        "RelationName": null,
+        "DefaultValue": null
+      },
+      {
+        "Name": "age",
+        "ID": 1,
+        "Kind": null,
+        "RelationName": null,
+        "DefaultValue": null
+      },
+      {
+        "Name": "name",
+        "ID": 2,
+        "Kind": null,
+        "RelationName": null,
+        "DefaultValue": null
+      }
+    ],
+    "Indexes": [],
+    "Policy": {
+      "ID": "ec11b7e29a4e195f95787e2ec9b65af134718d16a2c9cd655b5e04562d1cabf9",
+      "ResourceName": "users"
+    },
+    "IsMaterialized": true
+  }
+]
+```
+
+Create a private document:
+```sh
+defradb client collection create --name Users '[{ "name": "SecretShahzadLone" }]' \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Only the owner can see it:
+```sh
+defradb client collection docIDs --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "docID": "bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c",
+  "error": ""
+}
+```
+
+Another actor can not:
+```sh
+defradb client collection docIDs --identity 4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5
+```
+
+**Result is empty from the above command**
+
+
+Now let's make the other actor a reader of the document by adding a relationship:
+```sh
+defradb client acp relationship add \
+--collection Users \
+--docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \
+--relation reader \
+--actor did:key:z7r8os2G88XXBNBTLj3kFR5rzUJ4VAesbX7PgsA68ak9B5RYcXF5EZEmjRzzinZndPSSwujXb4XKHG6vmKEFG6ZfsfcQn \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "ExistedAlready": false
+}
+```
+
+**Note: If the same relationship is created again, `ExistedAlready` will be true, indicating a no-op.**
+
+Now the other actor can read:
+```sh
+defradb client collection docIDs \
--identity 4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5
+```
+
+Result:
+```json
+{
+  "docID": "bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c",
+  "error": ""
+}
+```
+
+But they still can not perform an update, as they were only granted read permission (through the `reader` relation):
+```sh
+defradb client collection update --name Users --docID "bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c" \
+--identity 4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5 '{ "name": "SecretUpdatedShahzad" }'
+```
+
+Result:
+```sh
+Error: document not found or not authorized to access
+```
+
+Sometimes we might want to give a specific access (i.e. form a relationship) not just with one identity, but with
+any identity (this includes even requests with no identity).
+In that case we can specify "*" instead of specifying an explicit `actor`:
+```sh
+defradb client acp relationship add \
+--collection Users \
+--docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \
+--relation reader \
+--actor "*" \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "ExistedAlready": false
+}
+```
+
+**Note: specifying `*` does not overwrite any previously formed relationships; they will remain as is.**
+
+### Revoking Access To Private Documents
+
+To revoke access to a document for an actor, we must delete the relationship between the
+actor and the document. In order to delete the relationship, we require all of the following:
+
+1) Target DocID: The docID of the document we want to delete a relationship for.
+2) Collection Name: The name of the collection that has the Target DocID.
+3) Relation Name: The type of relation (the name must be defined within the linked policy on the collection).
+4) Target Identity: The identity of the actor the relationship is being deleted for.
+5) Requesting Identity: The identity of the actor that is making the request.
+
+Notes:
+  - ACP must be available (i.e. ACP can not be disabled).
+  - The target document must already be registered with ACP (policy & resource specified).
+  - The requesting identity MUST be either the owner OR the manager (manages the relation) of the resource.
+  - If the relationship record was not found, then it will be a no-op.
+
+Consider the same policy and added relationship from the previous section, where we learned
+how to share the document with other actors.
+
+We made the document accessible to an actor by adding a relationship:
+```sh
+defradb client acp relationship add \
+--collection Users \
+--docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \
+--relation reader \
+--actor did:key:z7r8os2G88XXBNBTLj3kFR5rzUJ4VAesbX7PgsA68ak9B5RYcXF5EZEmjRzzinZndPSSwujXb4XKHG6vmKEFG6ZfsfcQn \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "ExistedAlready": false
+}
+```
+
+Similarly, in order to revoke access to a document we have the following command to delete the relationship:
+```sh
+defradb client acp relationship delete \
+--collection Users \
+--docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \
+--relation reader \
+--actor did:key:z7r8os2G88XXBNBTLj3kFR5rzUJ4VAesbX7PgsA68ak9B5RYcXF5EZEmjRzzinZndPSSwujXb4XKHG6vmKEFG6ZfsfcQn \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "RecordFound": true
+}
+```
+
+**Note: If the same relationship is deleted again (or a record for the relationship does not exist), then `RecordFound`
+will be false, indicating a no-op.**
+
+Now the other actor can no longer read:
+```sh
+defradb client collection docIDs --identity 4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5
+```
+
+**Result is empty from the above command**
+
+We can also revoke the previously granted implicit relationship which gave all actors access using the "*" actor.
+Similarly, we can just specify "*" to revoke all access that was given to actors implicitly through this relationship:
+```sh
+defradb client acp relationship delete \
+--collection Users \
+--docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \
+--relation reader \
+--actor "*" \
+--identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac
+```
+
+Result:
+```json
+{
+  "RecordFound": true
+}
+```
+
+**Note: Deleting with `*` does not remove any explicitly formed relationships; they will remain as they were.**
+
+## DAC Usage HTTP:
+
+### Authentication
+
+To perform authenticated operations you will need to build and sign a JWT token with the following required fields:
+
+- `sub` public key of the identity
+- `aud` host name of the defradb api
+- The `exp` and `nbf` fields should also be set to short-lived durations.
+
+Additionally, if using SourceHub ACP, the following must be set:
+- `iss` should be set to the user's DID, e.g. `"did:key:z6MkkHsQbp3tXECqmUJoCJwyuxSKn1BDF1RHzwDGg9tHbXKw"`
+- `iat` should be set to the current unix timestamp
+- `authorized_account` should be set to the SourceHub address of the account signing SourceHub transactions on your
+  behalf - WARNING - this will currently enable this account to make any SourceHub transaction as your user for the lifetime of the
+  token, so please only set this if you fully trust the node/account.
+
+The JWT must be signed with the `secp256k1` private key of the identity you wish to perform actions as.
+
+The signed token must be set on the `Authorization` header of the HTTP request with the `bearer ` prefix prepended to it.
+
+If authentication fails for any reason, a `403` Forbidden response will be returned.
+
+## _AAC DPI Rules (coming soon)_
+## _AAC Usage: (coming soon)_
+
+## _FAC DPI Rules (coming soon)_
+## _FAC Usage: (coming soon)_
+
+## Warning / Caveats
+- If using Local ACP, P2P will only work with collections that do not have a policy assigned.
If you wish to use ACP
+on collections connected to a multi-node network, please use SourceHub ACP.
+
+The following features currently don't work with ACP; they are being actively worked on:
+- [Adding Secondary Indexes](https://github.com/sourcenetwork/defradb/issues/2365)
+- [Backing Up/Restoring Private Documents](https://github.com/sourcenetwork/defradb/issues/2430)
+
+The following features may have undefined/unstable behavior until they are properly tested:
+- [Views](https://github.com/sourcenetwork/defradb/issues/2018)
+- [Average Operations](https://github.com/sourcenetwork/defradb/issues/2475)
+- [Count Operations](https://github.com/sourcenetwork/defradb/issues/2474)
+- [Group Operations](https://github.com/sourcenetwork/defradb/issues/2473)
+- [Limit Operations](https://github.com/sourcenetwork/defradb/issues/2472)
+- [Order Operations](https://github.com/sourcenetwork/defradb/issues/2471)
+- [Sum Operations](https://github.com/sourcenetwork/defradb/issues/2470)
+- [Dag/Commit Operations](https://github.com/sourcenetwork/defradb/issues/2469)
+- [Delete With Filter Operations](https://github.com/sourcenetwork/defradb/issues/2468)
+- [Update With Filter Operations](https://github.com/sourcenetwork/defradb/issues/2467)
+- [Type Join Many Operations](https://github.com/sourcenetwork/defradb/issues/2466)
+- [Type Join One Operations](https://github.com/sourcenetwork/defradb/issues/2466)
+- [Parallel Operations](https://github.com/sourcenetwork/defradb/issues/2465)
+- [Execute Explain](https://github.com/sourcenetwork/defradb/issues/2464)
diff --git a/docs/references/cli/_category_.json b/docs/defradb/references/cli/_category_.json
similarity index 100%
rename from docs/references/cli/_category_.json
rename to docs/defradb/references/cli/_category_.json
diff --git a/docs/defradb/references/cli/defradb.md b/docs/defradb/references/cli/defradb.md
new file mode 100644
index 0000000..9def574
--- /dev/null
+++ b/docs/defradb/references/cli/defradb.md
@@ -0,0 +1,41
@@ +## defradb + +DefraDB Edge Database + +### Synopsis + +DefraDB is the edge database to power the user-centric future. + +Start a DefraDB node, interact with a local or remote node, and much more. + + +### Options + +``` + -h, --help help for defradb + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb identity](defradb_identity.md) - Interact with identity features of DefraDB instance +* [defradb keyring](defradb_keyring.md) - Manage DefraDB private keys +* [defradb server-dump](defradb_server-dump.md) - Dumps the state of the entire database +* [defradb start](defradb_start.md) - Start a DefraDB node +* [defradb version](defradb_version.md) - Display the version information of DefraDB and its components + diff --git 
a/docs/defradb/references/cli/defradb_client.md b/docs/defradb/references/cli/defradb_client.md new file mode 100644 index 0000000..c23547e --- /dev/null +++ b/docs/defradb/references/cli/defradb_client.md @@ -0,0 +1,53 @@ +## defradb client + +Interact with a DefraDB node + +### Synopsis + +Interact with a DefraDB node. +Execute queries, add schema types, obtain node info, etc. + +### Options + +``` + -h, --help help for client + -i, --identity string Hex formatted private key used to authenticate with ACP + --tx uint Transaction ID +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb](defradb.md) - DefraDB Edge Database +* [defradb client acp](defradb_client_acp.md) - Interact with the access control system of a DefraDB node +* [defradb client backup](defradb_client_backup.md) - Interact with the backup utility +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. +* [defradb client dump](defradb_client_dump.md) - Dump the contents of DefraDB node-side +* [defradb client index](defradb_client_index.md) - Manage collections' indexes of a running DefraDB instance +* [defradb client node-identity](defradb_client_node-identity.md) - Get the public information about the node's identity +* [defradb client p2p](defradb_client_p2p.md) - Interact with the DefraDB P2P system +* [defradb client purge](defradb_client_purge.md) - Delete all persisted data and restart +* [defradb client query](defradb_client_query.md) - Send a DefraDB GraphQL query request +* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a DefraDB node +* [defradb client tx](defradb_client_tx.md) - Create, commit, and discard DefraDB transactions +* [defradb client view](defradb_client_view.md) - Manage views within a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_acp.md b/docs/defradb/references/cli/defradb_client_acp.md new file mode 100644 index 0000000..d2ffce5 --- 
/dev/null +++ b/docs/defradb/references/cli/defradb_client_acp.md @@ -0,0 +1,46 @@ +## defradb client acp + +Interact with the access control system of a DefraDB node + +### Synopsis + +Interact with the access control system of a DefraDB node + +Learn more about [ACP](/acp/README.md) + + + +### Options + +``` + -h, --help help for acp +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client acp policy](defradb_client_acp_policy.md) - Interact with the acp policy features of DefraDB instance +* [defradb client acp relationship](defradb_client_acp_relationship.md) - Interact with the acp relationship features of DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_acp_policy.md b/docs/defradb/references/cli/defradb_client_acp_policy.md new file mode 100644 index 0000000..c0c8d6e --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_acp_policy.md @@ -0,0 +1,41 @@ +## defradb client acp policy + +Interact with the acp policy features of DefraDB instance + +### Synopsis + +Interact with the acp policy features of DefraDB instance + +### Options + +``` + -h, --help help for policy +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. 
Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client acp](defradb_client_acp.md) - Interact with the access control system of a DefraDB node +* [defradb client acp policy add](defradb_client_acp_policy_add.md) - Add new policy + diff --git a/docs/defradb/references/cli/defradb_client_acp_policy_add.md b/docs/defradb/references/cli/defradb_client_acp_policy_add.md new file mode 100644 index 0000000..bef3750 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_acp_policy_add.md @@ -0,0 +1,91 @@ +## defradb client acp policy add + +Add new policy + +### Synopsis + +Add new policy + +Notes: + - Can not add a policy without specifying an identity. + - ACP must be available (i.e. ACP can not be disabled). + - A non-DPI policy will be accepted (will be registered with acp system). + - But only a valid DPI policyID & resource can be specified on a schema. + - DPI validation happens when attempting to add a schema with '@policy'. 
+ - Learn more about [ACP & DPI Rules](/acp/README.md) + +Example: add from an argument string: + defradb client acp policy add -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f \ +' +description: A Valid DefraDB Policy Interface + +actor: + name: actor + +resources: + users: + permissions: + read: + expr: owner + reader + write: + expr: owner + + relations: + owner: + types: + - actor + reader: + types: + - actor +' + +Example: add from file: + defradb client acp policy add -f policy.yml \ + -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f + +Example: add from file, verbose flags: + defradb client acp policy add --file policy.yml \ + --identity 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f + +Example: add from stdin: + cat policy.yml | defradb client acp policy add - + + + +``` +defradb client acp policy add [-i --identity] [policy] [flags] +``` + +### Options + +``` + -f, --file string File to load a policy from + -h, --help help for add +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client acp policy](defradb_client_acp_policy.md) - Interact with the ACP policy features of a DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_acp_relationship.md b/docs/defradb/references/cli/defradb_client_acp_relationship.md new file mode 100644 index 0000000..2518f6c --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_acp_relationship.md @@ -0,0 +1,42 @@ +## defradb client acp relationship + +Interact with the ACP relationship features of a DefraDB instance + +### Synopsis + +Interact with the ACP relationship features of a DefraDB instance + +### Options + +``` + -h, --help help for relationship +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout.
(default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client acp](defradb_client_acp.md) - Interact with the access control system of a DefraDB node +* [defradb client acp relationship add](defradb_client_acp_relationship_add.md) - Add new relationship +* [defradb client acp relationship delete](defradb_client_acp_relationship_delete.md) - Delete relationship + diff --git a/docs/defradb/references/cli/defradb_client_acp_relationship_add.md b/docs/defradb/references/cli/defradb_client_acp_relationship_add.md new file mode 100644 index 0000000..f3313b4 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_acp_relationship_add.md @@ -0,0 +1,89 @@ +## defradb client acp relationship add + +Add new relationship + +### Synopsis + +Add new relationship + +To share a document (or grant more restricted access) with another actor, we must add a relationship between the +actor and the document. To create the relationship, all of the following are required: +1) Target DocID: The docID of the document we want to make a relationship for. +2) Collection Name: The name of the collection that has the Target DocID. +3) Relation Name: The type of relation (name must be defined within the linked policy on collection).
+4) Target Identity: The identity of the actor the relationship is being made with. +5) Requesting Identity: The identity of the actor that is making the request. + +Notes: + - ACP must be available (i.e. ACP can not be disabled). + - The target document must be registered with ACP already (policy & resource specified). + - The requesting identity MUST either be the owner OR the manager (manages the relation) of the resource. + - If the specified relation was not granted the minimum DPI permissions (read or write) within the policy, + and a relationship is formed, the subject/actor will still not be able to access (read or write) the resource. + - Learn more about [ACP & DPI Rules](/acp/README.md) + +Example: Let another actor (4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5) read a private document: + defradb client acp relationship add \ + --collection Users \ + --docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \ + --relation reader \ + --actor did:key:z7r8os2G88XXBNBTLj3kFR5rzUJ4VAesbX7PgsA68ak9B5RYcXF5EZEmjRzzinZndPSSwujXb4XKHG6vmKEFG6ZfsfcQn \ + --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac + +Example: Let all actors read a private document: + defradb client acp relationship add \ + --collection Users \ + --docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \ + --relation reader \ + --actor "*" \ + --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac + +Example: Creating a dummy relationship does nothing (from database perspective): + defradb client acp relationship add \ + -c Users \ + --docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \ + -r dummy \ + -a did:key:z7r8os2G88XXBNBTLj3kFR5rzUJ4VAesbX7PgsA68ak9B5RYcXF5EZEmjRzzinZndPSSwujXb4XKHG6vmKEFG6ZfsfcQn \ + -i e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac + + +``` +defradb client acp relationship add [--docID] [-c --collection] [-r --relation] [-a --actor] [-i --identity] [flags] +``` + +### Options + +``` + -a, --actor string 
Actor to add relationship with + -c, --collection string Collection that has the resource and policy for object + --docID string Document Identifier (ObjectID) to make relationship for + -h, --help help for add + -r, --relation string Relation that needs to be set for the relationship +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client acp relationship](defradb_client_acp_relationship.md) - Interact with the ACP relationship features of a DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_acp_relationship_delete.md b/docs/defradb/references/cli/defradb_client_acp_relationship_delete.md new file mode 100644 index 0000000..8da5e6a --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_acp_relationship_delete.md @@ -0,0 +1,73 @@ +## defradb client acp relationship delete + +Delete relationship + +### Synopsis + +Delete relationship + +To revoke access to a document for an actor, we must delete the relationship between the +actor and the document. To delete the relationship, all of the following are required: + +1) Target DocID: The docID of the document we want to delete a relationship for. +2) Collection Name: The name of the collection that has the Target DocID. +3) Relation Name: The type of relation (name must be defined within the linked policy on collection). +4) Target Identity: The identity of the actor the relationship is being deleted for. +5) Requesting Identity: The identity of the actor that is making the request. + +Notes: + - ACP must be available (i.e. ACP cannot be disabled). + - The target document must be registered with ACP already (policy & resource specified).
+ - The requesting identity MUST either be the owner OR the manager (manages the relation) of the resource. + - If the relationship record was not found, then it will be a no-op. + - Learn more about [ACP & DPI Rules](/acp/README.md) + +Example: Let another actor (4d092126012ebaf56161716018a71630d99443d9d5217e9d8502bb5c5456f2c5) read a private document: + defradb client acp relationship delete \ + --collection Users \ + --docID bae-ff3ceb1c-b5c0-5e86-a024-dd1b16a4261c \ + --relation reader \ + --actor did:key:z7r8os2G88XXBNBTLj3kFR5rzUJ4VAesbX7PgsA68ak9B5RYcXF5EZEmjRzzinZndPSSwujXb4XKHG6vmKEFG6ZfsfcQn \ + --identity e3b722906ee4e56368f581cd8b18ab0f48af1ea53e635e3f7b8acd076676f6ac + + +``` +defradb client acp relationship delete [--docID] [-c --collection] [-r --relation] [-a --actor] [-i --identity] [flags] +``` + +### Options + +``` + -a, --actor string Actor to delete relationship for + -c, --collection string Collection that has the resource and policy for object + --docID string Document Identifier (ObjectID) to delete relationship for + -h, --help help for delete + -r, --relation string Relation that needs to be deleted within the relationship +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client acp relationship](defradb_client_acp_relationship.md) - Interact with the ACP relationship features of a DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_backup.md b/docs/defradb/references/cli/defradb_client_backup.md new file mode 100644 index 0000000..cf7b2d0 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_backup.md @@ -0,0 +1,43 @@ +## defradb client backup + +Interact with the backup utility + +### Synopsis + +Export to or import from a backup file. +Currently only supports JSON format. + +### Options + +``` + -h, --help help for backup +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides.
Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client backup export](defradb_client_backup_export.md) - Export the database to a file +* [defradb client backup import](defradb_client_backup_import.md) - Import a JSON data file to the database + diff --git a/docs/defradb/references/cli/defradb_client_backup_export.md b/docs/defradb/references/cli/defradb_client_backup_export.md new file mode 100644 index 0000000..fe0887f --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_backup_export.md @@ -0,0 +1,55 @@ +## defradb client backup export + +Export the database to a file + +### Synopsis + +Export the database to a file. If a file exists at the location, it will be overwritten. + +If the --collections flag is provided, only the data for those collections will be exported. +Otherwise, all collections in the database will be exported. + +If the --pretty flag is provided, the JSON will be pretty printed. + +Example: export data for the 'Users' collection: + defradb client backup export --collections Users user_data.json + +``` +defradb client backup export [-c --collections | -p --pretty | -f --format] [flags] +``` + +### Options + +``` + -c, --collections strings List of collections + -f, --format string Define the output format.
Supported formats: [json] (default "json") + -h, --help help for export + -p, --pretty Set the output JSON to be pretty printed +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client backup](defradb_client_backup.md) - Interact with the backup utility + diff --git a/docs/defradb/references/cli/defradb_client_backup_import.md b/docs/defradb/references/cli/defradb_client_backup_import.md new file mode 100644 index 0000000..7511f76 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_backup_import.md @@ -0,0 +1,47 @@ +## defradb client backup import + +Import a JSON data file to the database + +### Synopsis + +Import a 
JSON data file to the database. + +Example: import data to the database: + defradb client backup import user_data.json + +``` +defradb client backup import [flags] +``` + +### Options + +``` + -h, --help help for import +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client backup](defradb_client_backup.md) - Interact with the backup utility + diff --git a/docs/defradb/references/cli/defradb_client_collection.md b/docs/defradb/references/cli/defradb_client_collection.md new file mode 100644 index 0000000..d4ea8c3 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection.md @@ -0,0 +1,51 @@ +## defradb client
collection + +Interact with a collection. + +### Synopsis + +Create, read, update, and delete documents within a collection. + +### Options + +``` + --get-inactive Get inactive collections as well as active + -h, --help help for collection + -i, --identity string Hex formatted private key used to authenticate with ACP + --name string Collection name + --schema string Collection schema Root + --tx uint Transaction ID + --version string Collection version ID +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client collection create](defradb_client_collection_create.md) - Create a new document. 
+* [defradb client collection delete](defradb_client_collection_delete.md) - Delete documents by docID or filter. +* [defradb client collection describe](defradb_client_collection_describe.md) - View collection description. +* [defradb client collection docIDs](defradb_client_collection_docIDs.md) - List all document IDs (docIDs). +* [defradb client collection get](defradb_client_collection_get.md) - View document fields. +* [defradb client collection patch](defradb_client_collection_patch.md) - Patch existing collection descriptions. +* [defradb client collection update](defradb_client_collection_update.md) - Update documents by docID or filter. + diff --git a/docs/defradb/references/cli/defradb_client_collection_create.md b/docs/defradb/references/cli/defradb_client_collection_create.md new file mode 100644 index 0000000..8c88041 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_create.md @@ -0,0 +1,83 @@ +## defradb client collection create + +Create a new document. + +### Synopsis + +Create a new document. + +Options: + -i, --identity + Marks the document as private and sets the identity as the owner. Access to the document + and its permissions are controlled by ACP (Access Control Policy). + + -e, --encrypt + Encrypt the document. If set, DefraDB will generate a + symmetric key for encryption using AES-GCM. + + --encrypt-fields + Comma-separated list of fields to encrypt. If set, DefraDB will encrypt only the specified fields, + generating a symmetric key for each listed field using AES-GCM. + If combined with the '--encrypt' flag, all the fields in the document not listed in '--encrypt-fields' + will be encrypted with the same key.
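The encryption flags above compose with the create command; a hypothetical sketch (assumes a running DefraDB node, a User collection, and an email field — the field name is illustrative, not from this reference):

```
Example: create with full-document encryption:
  defradb client collection create --name User --encrypt '{ "name": "Bob" }'

Example: encrypt only the email field (unlisted fields stay in plaintext):
  defradb client collection create --name User --encrypt-fields email '{ "name": "Bob", "email": "bob@example.com" }'
```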
+ +Example: create from string: + defradb client collection create --name User '{ "name": "Bob" }' + +Example: create from string, with identity: + defradb client collection create --name User '{ "name": "Bob" }' \ + -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f + +Example: create multiple from string: + defradb client collection create --name User '[{ "name": "Alice" }, { "name": "Bob" }]' + +Example: create from file: + defradb client collection create --name User -f document.json + +Example: create from stdin: + cat document.json | defradb client collection create --name User - + + +``` +defradb client collection create [-i --identity] [-e --encrypt] [--encrypt-fields] [flags] +``` + +### Options + +``` + -e, --encrypt Flag to enable encryption of the document + --encrypt-fields strings Comma-separated list of fields to encrypt + -f, --file string File containing document(s) + -h, --help help for create +``` + +### Options inherited from parent commands + +``` + --get-inactive Get inactive collections as well as active + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --name string Collection name + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --schema string Collection schema Root + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") + --version string Collection version ID +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. + diff --git a/docs/defradb/references/cli/defradb_client_collection_delete.md b/docs/defradb/references/cli/defradb_client_collection_delete.md new file mode 100644 index 0000000..d23c480 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_delete.md @@ -0,0 +1,61 @@ +## defradb client collection delete + +Delete documents by docID or filter. + +### Synopsis + +Delete documents by docID or filter and list the number of documents deleted.
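The filter and identity flags shown below can also be combined; a hypothetical sketch (assumes a running DefraDB node and that the flags compose as documented — the key and filter values are illustrative):

```
Example: delete by filter while authenticating with an identity:
  defradb client collection delete --name User --filter '{ "_gte": { "points": 100 } }' \
    -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f
```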
+ +Example: delete by docID: + defradb client collection delete --name User --docID bae-123 + +Example: delete by docID with identity: + defradb client collection delete --name User --docID bae-123 \ + -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f + +Example: delete by filter: + defradb client collection delete --name User --filter '{ "_gte": { "points": 100 } }' + + +``` +defradb client collection delete [-i --identity] [--filter --docID ] [flags] +``` + +### Options + +``` + --docID string Document ID + --filter string Document filter + -h, --help help for delete +``` + +### Options inherited from parent commands + +``` + --get-inactive Get inactive collections as well as active + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --name string Collection name + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --schema string Collection schema Root + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") + --version string Collection version ID +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. + diff --git a/docs/defradb/references/cli/defradb_client_collection_describe.md b/docs/defradb/references/cli/defradb_client_collection_describe.md new file mode 100644 index 0000000..3de635d --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_describe.md @@ -0,0 +1,61 @@ +## defradb client collection describe + +View collection description. + +### Synopsis + +Introspect collection types. + +Example: view all collections + defradb client collection describe + +Example: view collection by name + defradb client collection describe --name User + +Example: view collection by schema root id + defradb client collection describe --schema bae123 + +Example: view collection by version id. 
This will also return inactive collections + defradb client collection describe --version bae123 + + +``` +defradb client collection describe [flags] +``` + +### Options + +``` + --get-inactive Get inactive collections as well as active + -h, --help help for describe + --name string Collection name + --schema string Collection schema Root + --version string Collection version ID +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. 
+ diff --git a/docs/defradb/references/cli/defradb_client_collection_docIDs.md b/docs/defradb/references/cli/defradb_client_collection_docIDs.md new file mode 100644 index 0000000..4e57523 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_docIDs.md @@ -0,0 +1,55 @@ +## defradb client collection docIDs + +List all document IDs (docIDs). + +### Synopsis + +List all document IDs (docIDs). + +Example: list all docID(s): + defradb client collection docIDs --name User + +Example: list all docID(s), with an identity: + defradb client collection docIDs -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f --name User + + +``` +defradb client collection docIDs [-i --identity] [flags] +``` + +### Options + +``` + -h, --help help for docIDs +``` + +### Options inherited from parent commands + +``` + --get-inactive Get inactive collections as well as active + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --name string Collection name + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --schema string Collection schema Root + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") + --version string Collection version ID +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. + diff --git a/docs/defradb/references/cli/defradb_client_collection_get.md b/docs/defradb/references/cli/defradb_client_collection_get.md new file mode 100644 index 0000000..8b5b530 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_get.md @@ -0,0 +1,56 @@ +## defradb client collection get + +View document fields. + +### Synopsis + +View document fields. + +Example: + defradb client collection get --name User bae-123 + +Example: get a private document, with an identity: + defradb client collection get -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f --name User bae-123 + + +``` +defradb client collection get [-i --identity] [--show-deleted] [flags] +``` + +### Options + +``` + -h, --help help for get + --show-deleted Show deleted documents +``` + +### Options inherited from parent commands + +``` + --get-inactive Get inactive collections as well as active + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --name string Collection name + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --schema string Collection schema Root + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") + --version string Collection version ID +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. + diff --git a/docs/defradb/references/cli/defradb_client_collection_patch.md b/docs/defradb/references/cli/defradb_client_collection_patch.md new file mode 100644 index 0000000..ea2e581 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_patch.md @@ -0,0 +1,62 @@ +## defradb client collection patch + +Patch existing collection descriptions + +### Synopsis + +Patch existing collection descriptions. + +Uses JSON Patch to modify collection descriptions. 
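To make the JSON Patch format concrete, here is a minimal Python sketch (standard library only) of what an "add" operation does to a JSON document. The collection-description fields shown are hypothetical placeholders for illustration, not DefraDB's actual description schema, and only the "add" op is handled:

```python
import json

def apply_add(doc, path, value):
    # Minimal JSON Patch "add": walk the path segments, then set a key on an
    # object or append/insert into a list ("-" means append, per RFC 6902).
    parts = [p for p in path.split("/") if p]
    target = doc
    for part in parts[:-1]:
        target = target[int(part)] if isinstance(target, list) else target[part]
    last = parts[-1]
    if isinstance(target, list):
        target.append(value) if last == "-" else target.insert(int(last), value)
    else:
        target[last] = value

# Hypothetical collection description, for illustration only.
description = {"Name": "User", "Fields": [{"Name": "name"}]}
patch = [{"op": "add", "path": "/Fields/-", "value": {"Name": "points"}}]

for op in patch:
    if op["op"] == "add":  # only "add" is sketched here
        apply_add(description, op["path"], op["value"])

print(json.dumps(description))
```

The same `[{ "op": ..., "path": ..., "value": ... }]` array shape is what the command accepts as its argument, from a file, or on stdin.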
+ +Example: patch from an argument string: + defradb client collection patch '[{ "op": "add", "path": "...", "value": {...} }]' + +Example: patch from file: + defradb client collection patch -p patch.json + +Example: patch from stdin: + cat patch.json | defradb client collection patch - + +To learn more about the DefraDB GraphQL Schema Language, refer to https://docs.source.network. + +``` +defradb client collection patch [patch] [flags] +``` + +### Options + +``` + -h, --help help for patch + -p, --patch-file string File to load a patch from +``` + +### Options inherited from parent commands + +``` + --get-inactive Get inactive collections as well as active + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --name string Collection name + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --schema string Collection schema Root + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") + --version string Collection version ID +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. + diff --git a/docs/defradb/references/cli/defradb_client_collection_update.md b/docs/defradb/references/cli/defradb_client_collection_update.md new file mode 100644 index 0000000..f21bba3 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_collection_update.md @@ -0,0 +1,67 @@ +## defradb client collection update + +Update documents by docID or filter. + +### Synopsis + +Update documents by docID or filter. 
+ +Example: update from string: + defradb client collection update --name User --docID bae-123 '{ "name": "Bob" }' + +Example: update by filter: + defradb client collection update --name User \ + --filter '{ "_gte": { "points": 100 } }' --updater '{ "verified": true }' + +Example: update by docID: + defradb client collection update --name User \ + --docID bae-123 --updater '{ "verified": true }' + +Example: update private docID, with identity: + defradb client collection update -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f --name User \ + --docID bae-123 --updater '{ "verified": true }' + + +``` +defradb client collection update [-i --identity] [--filter --docID --updater ] [flags] +``` + +### Options + +``` + --docID string Document ID + --filter string Document filter + -h, --help help for update + --updater string Document updater +``` + +### Options inherited from parent commands + +``` + --get-inactive Get inactive collections as well as active + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --name string Collection name + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --schema string Collection schema Root + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") + --version string Collection version ID +``` + +### SEE ALSO + +* [defradb client collection](defradb_client_collection.md) - Interact with a collection. + diff --git a/docs/defradb/references/cli/defradb_client_dump.md b/docs/defradb/references/cli/defradb_client_dump.md new file mode 100644 index 0000000..ebc3eec --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_dump.md @@ -0,0 +1,40 @@ +## defradb client dump + +Dump the contents of DefraDB node-side + +``` +defradb client dump [flags] +``` + +### Options + +``` + -h, --help help for dump +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. 
Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_index.md b/docs/defradb/references/cli/defradb_client_index.md new file mode 100644 index 0000000..7143286 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_index.md @@ -0,0 +1,43 @@ +## defradb client index + +Manage collections' indexes of a running DefraDB instance + +### Synopsis + +Manage (create, drop, or list) collection indexes on a DefraDB node. + +### Options + +``` + -h, --help help for index +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client index create](defradb_client_index_create.md) - Creates a secondary index on a collection's field(s) +* [defradb client index drop](defradb_client_index_drop.md) - Drop a collection's secondary index +* [defradb client index list](defradb_client_index_list.md) - Shows the indexes in the database or for a specific collection + diff --git a/docs/defradb/references/cli/defradb_client_index_create.md b/docs/defradb/references/cli/defradb_client_index_create.md new file mode 100644 index 0000000..268cd9e --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_index_create.md @@ -0,0 +1,62 @@ +## defradb client index create + +Creates a secondary index on a collection's field(s) + +### Synopsis + +Creates a secondary index on a collection's field(s). + +The --name flag is optional. If not provided, a name will be generated automatically. +The --unique flag is optional. If provided, the index will be unique.
+If no order is specified for the field, the default value will be "ASC" + +Example: create an index for 'Users' collection on 'name' field: + defradb client index create --collection Users --fields name + +Example: create a named index for 'Users' collection on 'name' field: + defradb client index create --collection Users --fields name --name UsersByName + +Example: create a unique index for 'Users' collection on 'name' in ascending order, and 'age' in descending order: + defradb client index create --collection Users --fields name:ASC,age:DESC --unique + + +``` +defradb client index create -c --collection --fields [-n --name ] [--unique] [flags] +``` + +### Options + +``` + -c, --collection string Collection name + --fields strings Fields to index + -h, --help help for create + -n, --name string Index name + -u, --unique Make the index unique +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client index](defradb_client_index.md) - Manage collections' indexes of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_index_drop.md b/docs/defradb/references/cli/defradb_client_index_drop.md new file mode 100644 index 0000000..081f5e2 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_index_drop.md @@ -0,0 +1,49 @@ +## defradb client index drop + +Drop a collection's secondary index + +### Synopsis + +Drop a collection's secondary index. + +Example: drop the index 'UsersByName' for 'Users' collection: + defradb client index drop --collection Users --name UsersByName + +``` +defradb client index drop -c --collection -n --name [flags] +``` + +### Options + +``` + -c, --collection string Collection name + -h, --help help for drop + -n, --name string Index name +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client index](defradb_client_index.md) - Manage collections' indexes of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_index_list.md b/docs/defradb/references/cli/defradb_client_index_list.md new file mode 100644 index 0000000..c5cf211 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_index_list.md @@ -0,0 +1,51 @@ +## defradb client index list + +Shows the indexes in the database or for a specific collection + +### Synopsis + +Shows the indexes in the database or for a specific collection. + +If the --collection flag is provided, only the indexes for that collection will be shown. +Otherwise, all indexes in the database will be shown.
+ +Example: show all indexes for 'Users' collection: + defradb client index list --collection Users + +``` +defradb client index list [-c --collection ] [flags] +``` + +### Options + +``` + -c, --collection string Collection name + -h, --help help for list +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client index](defradb_client_index.md) - Manage collections' indexes of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_node-identity.md b/docs/defradb/references/cli/defradb_client_node-identity.md new file mode 100644 index 0000000..907a959 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_node-identity.md @@ -0,0 +1,55 @@ +## defradb client node-identity + +Get the public information about the node's identity + +### Synopsis + +Get the public information about the node's identity. + +The node uses its identity to exchange encryption keys with other nodes. + +A public identity contains: +- A compressed 33-byte secp256k1 public key in HEX format. +- A "did:key" generated from the public key. + +Example: get the node's identity: + defradb client node-identity + + + +``` +defradb client node-identity [flags] +``` + +### Options + +``` + -h, --help help for node-identity +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_p2p.md b/docs/defradb/references/cli/defradb_client_p2p.md new file mode 100644 index 0000000..d998508 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p.md @@ -0,0 +1,43 @@ +## defradb client p2p + +Interact with the DefraDB P2P system + +### Synopsis + +Interact with the DefraDB P2P system + +### Options + +``` + -h, --help help for p2p +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. 
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client p2p collection](defradb_client_p2p_collection.md) - Configure the P2P collection system +* [defradb client p2p info](defradb_client_p2p_info.md) - Get peer info from a DefraDB node +* [defradb client p2p replicator](defradb_client_p2p_replicator.md) - Configure the replicator system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_collection.md b/docs/defradb/references/cli/defradb_client_p2p_collection.md new file mode 100644 index 0000000..bc1f8c6 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_collection.md @@ -0,0 +1,44 @@ +## defradb client p2p collection + +Configure the P2P collection system + +### Synopsis + +Add, delete, or get the list of P2P 
collections. +The selected collections synchronize their events on the pubsub network. + +### Options + +``` + -h, --help help for collection +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p](defradb_client_p2p.md) - Interact with the DefraDB P2P system +* [defradb client p2p collection add](defradb_client_p2p_collection_add.md) - Add P2P collections +* [defradb client p2p collection getall](defradb_client_p2p_collection_getall.md) - Get all P2P collections +* [defradb client p2p collection remove](defradb_client_p2p_collection_remove.md) - Remove P2P collections + diff --git 
a/docs/defradb/references/cli/defradb_client_p2p_collection_add.md b/docs/defradb/references/cli/defradb_client_p2p_collection_add.md new file mode 100644 index 0000000..836c066 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_collection_add.md @@ -0,0 +1,52 @@ +## defradb client p2p collection add + +Add P2P collections + +### Synopsis + +Add P2P collections to the synchronized pubsub topics. +The collections are synchronized between nodes of a pubsub network. + +Example: add single collection + defradb client p2p collection add bae123 + +Example: add multiple collections + defradb client p2p collection add bae123,bae456 + + +``` +defradb client p2p collection add [collectionIDs] [flags] +``` + +### Options + +``` + -h, --help help for add +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format ,=,...;,... 
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p collection](defradb_client_p2p_collection.md) - Configure the P2P collection system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_collection_getall.md b/docs/defradb/references/cli/defradb_client_p2p_collection_getall.md new file mode 100644 index 0000000..3df290b --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_collection_getall.md @@ -0,0 +1,45 @@ +## defradb client p2p collection getall + +Get all P2P collections + +### Synopsis + +Get all P2P collections in the pubsub topics. +This is the list of collections of the node that are synchronized on the pubsub network. + +``` +defradb client p2p collection getall [flags] +``` + +### Options + +``` + -h, --help help for getall +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. 
Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p collection](defradb_client_p2p_collection.md) - Configure the P2P collection system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_collection_remove.md b/docs/defradb/references/cli/defradb_client_p2p_collection_remove.md new file mode 100644 index 0000000..c3739df --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_collection_remove.md @@ -0,0 +1,52 @@ +## defradb client p2p collection remove + +Remove P2P collections + +### Synopsis + +Remove P2P collections from the followed pubsub topics. +The removed collections will no longer be synchronized between nodes. + +Example: remove single collection + defradb client p2p collection remove bae123 + +Example: remove multiple collections + defradb client p2p collection remove bae123,bae456 + + +``` +defradb client p2p collection remove [collectionIDs] [flags] +``` + +### Options + +``` + -h, --help help for remove +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p collection](defradb_client_p2p_collection.md) - Configure the P2P collection system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_info.md b/docs/defradb/references/cli/defradb_client_p2p_info.md new file mode 100644 index 0000000..cf78285 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_info.md @@ -0,0 +1,44 @@ +## defradb client p2p info + +Get peer info from a DefraDB node + +### Synopsis + +Get peer info from a DefraDB node + +``` +defradb client p2p info [flags] +``` + +### Options + +``` + -h, --help help for info +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p](defradb_client_p2p.md) - Interact with the DefraDB P2P system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_replicator.md b/docs/defradb/references/cli/defradb_client_p2p_replicator.md new file mode 100644 index 0000000..3c1efbc --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_replicator.md @@ -0,0 +1,44 @@ +## defradb client p2p replicator + +Configure the replicator system + +### Synopsis + +Configure the replicator system. Add, delete, or get the list of persisted replicators. +A replicator replicates one or all collection(s) from one node to another.
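The `set`, `getall`, and `delete` subcommands listed under SEE ALSO each appear in isolation; taken together, a typical replicator session might look like the following sketch (the peer ID and multiaddress are the placeholder values used in those subcommand examples, and a running DefraDB node is assumed):

```shell
# Start replicating the Users collection to a peer (placeholder peer ID and address).
defradb client p2p replicator set -c Users '{"ID": "12D3", "Addrs": ["/ip4/0.0.0.0/tcp/9171"]}'

# List the replicators currently persisted on this node.
defradb client p2p replicator getall

# Stop replicating Users to that peer and remove the persisted replicator.
defradb client p2p replicator delete -c Users '{"ID": "12D3", "Addrs": ["/ip4/0.0.0.0/tcp/9171"]}'
```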
+ +### Options + +``` + -h, --help help for replicator +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p](defradb_client_p2p.md) - Interact with the DefraDB P2P system +* [defradb client p2p replicator delete](defradb_client_p2p_replicator_delete.md) - Delete replicator(s) and stop synchronization +* [defradb client p2p replicator getall](defradb_client_p2p_replicator_getall.md) - Get all replicators +* [defradb client p2p replicator set](defradb_client_p2p_replicator_set.md) - Add replicator(s) and start synchronization + diff --git a/docs/defradb/references/cli/defradb_client_p2p_replicator_delete.md
b/docs/defradb/references/cli/defradb_client_p2p_replicator_delete.md new file mode 100644 index 0000000..626aafe --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_replicator_delete.md @@ -0,0 +1,50 @@ +## defradb client p2p replicator delete + +Delete replicator(s) and stop synchronization + +### Synopsis + +Delete replicator(s) and stop synchronization. +A replicator synchronizes one or all collection(s) from this node to another. + +Example: + defradb client p2p replicator delete -c Users '{"ID": "12D3", "Addrs": ["/ip4/0.0.0.0/tcp/9171"]}' + + +``` +defradb client p2p replicator delete [-c, --collection] [flags] +``` + +### Options + +``` + -c, --collection strings Collection(s) to stop replicating + -h, --help help for delete +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p replicator](defradb_client_p2p_replicator.md) - Configure the replicator system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_replicator_getall.md b/docs/defradb/references/cli/defradb_client_p2p_replicator_getall.md new file mode 100644 index 0000000..c9f9142 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_replicator_getall.md @@ -0,0 +1,49 @@ +## defradb client p2p replicator getall + +Get all replicators + +### Synopsis + +Get all the replicators active in the P2P data sync system. +A replicator synchronizes one or all collection(s) from this node to another. + +Example: + defradb client p2p replicator getall + + +``` +defradb client p2p replicator getall [flags] +``` + +### Options + +``` + -h, --help help for getall +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. 
Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p replicator](defradb_client_p2p_replicator.md) - Configure the replicator system + diff --git a/docs/defradb/references/cli/defradb_client_p2p_replicator_set.md b/docs/defradb/references/cli/defradb_client_p2p_replicator_set.md new file mode 100644 index 0000000..75d6efe --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_p2p_replicator_set.md @@ -0,0 +1,50 @@ +## defradb client p2p replicator set + +Add replicator(s) and start synchronization + +### Synopsis + +Add replicator(s) and start synchronization. +A replicator synchronizes one or all collection(s) from this node to another. + +Example: + defradb client p2p replicator set -c Users '{"ID": "12D3", "Addrs": ["/ip4/0.0.0.0/tcp/9171"]}' + + +``` +defradb client p2p replicator set [-c, --collection] [flags] +``` + +### Options + +``` + -c, --collection strings Collection(s) to replicate + -h, --help help for set +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client p2p replicator](defradb_client_p2p_replicator.md) - Configure the replicator system + diff --git a/docs/defradb/references/cli/defradb_client_purge.md b/docs/defradb/references/cli/defradb_client_purge.md new file mode 100644 index 0000000..82adc92 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_purge.md @@ -0,0 +1,46 @@ +## defradb client purge + +Delete all persisted data and restart + +### Synopsis + +Delete all persisted data and restart. +WARNING: this operation cannot be reversed.
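Because the purge is irreversible, the command refuses to run unless the force flag is set; a minimal invocation (assuming a node on the default endpoint) is:

```shell
# --force (or -f) must be set for the operation to run.
defradb client purge --force
```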
+ +``` +defradb client purge [flags] +``` + +### Options + +``` + -f, --force Must be set for the operation to run + -h, --help help for purge +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_query.md b/docs/defradb/references/cli/defradb_client_query.md new file mode 100644 index 0000000..abaea09 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_query.md @@ -0,0 +1,64 @@ +## defradb client query + +Send a DefraDB GraphQL query request + +### Synopsis + +Send a DefraDB GraphQL query request to the
database. + +A query request can be sent as a single argument. Example command: + defradb client query 'query { ... }' + +Send a query request from a file by using the '-f' flag. Example command: + defradb client query -f request.graphql + +Send a query request from a file with an identity. Example command: + defradb client query -i 028d53f37a19afb9a0dbc5b4be30c65731479ee8cfa0c9bc8f8bf198cc3c075f -f request.graphql + +Or send it via stdin by using the '-' special syntax. Example command: + cat request.graphql | defradb client query - + +A GraphQL client such as GraphiQL (https://github.com/graphql/graphiql) can be used to interact +with the database more conveniently. + +To learn more about the DefraDB GraphQL Query Language, refer to https://docs.source.network. + +``` +defradb client query [-i --identity] [request] [flags] +``` + +### Options + +``` + -f, --file string File containing the query request + -h, --help help for query + -o, --operation string Name of the operation to execute in the query + -v, --variables string JSON encoded variables to use in the query +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_schema.md b/docs/defradb/references/cli/defradb_client_schema.md new file mode 100644 index 0000000..51e3ae0 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema.md @@ -0,0 +1,45 @@ +## defradb client schema + +Interact with the schema system of a DefraDB node + +### Synopsis + +Make changes or updates to the schema, or look up existing schema types. + +### Options + +``` + -h, --help help for schema +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client schema add](defradb_client_schema_add.md) - Add new schema +* [defradb client schema describe](defradb_client_schema_describe.md) - View schema descriptions. +* [defradb client schema migration](defradb_client_schema_migration.md) - Interact with the schema migration system of a running DefraDB instance +* [defradb client schema patch](defradb_client_schema_patch.md) - Patch an existing schema type +* [defradb client schema set-active](defradb_client_schema_set-active.md) - Set the active collection version + diff --git a/docs/defradb/references/cli/defradb_client_schema_add.md b/docs/defradb/references/cli/defradb_client_schema_add.md new file mode 100644 index 0000000..ecff042 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_add.md @@ -0,0 +1,61 @@ +## defradb client schema add + +Add new schema + +### Synopsis + +Add new schema. + +Schema Object with a '@policy(id:".." resource: "..")' linked will only be accepted if: + - ACP is available (i.e. ACP is not disabled). + - The specified resource adheres to the Document Access Control DPI Rules. + - Learn more about [ACP & DPI Rules](/acp/README.md) + +Example: add from an argument string: + defradb client schema add 'type Foo { ... 
}' + +Example: add from file: + defradb client schema add -f schema.graphql + +Example: add from stdin: + cat schema.graphql | defradb client schema add - + +Learn more about the DefraDB GraphQL Schema Language on https://docs.source.network. + +``` +defradb client schema add [schema] [flags] +``` + +### Options + +``` + -f, --file string File to load a schema from + -h, --help help for add +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_schema_describe.md b/docs/defradb/references/cli/defradb_client_schema_describe.md new file mode 100644 index 0000000..a3dd96f --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_describe.md @@ -0,0 +1,60 @@ +## defradb client schema describe + +View schema descriptions. + +### Synopsis + +Introspect schema types. + +Example: view all schemas + defradb client schema describe + +Example: view schemas by name + defradb client schema describe --name User + +Example: view schemas by root + defradb client schema describe --root bae123 + +Example: view a single schema by version id + defradb client schema describe --version bae123 + + +``` +defradb client schema describe [flags] +``` + +### Options + +``` + -h, --help help for describe + --name string Schema name + --root string Schema root + --version string Schema Version ID +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. 
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_schema_migration.md b/docs/defradb/references/cli/defradb_client_schema_migration.md new file mode 100644 index 0000000..76be24e --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_migration.md @@ -0,0 +1,45 @@ +## defradb client schema migration + +Interact with the schema migration system of a running DefraDB instance + +### Synopsis + +Set new schema migrations or look up existing ones on a DefraDB node.
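As a sketch of how the migration subcommands documented below combine, the `down` and `reload` commands can be run back to back (the collection id and document are the placeholder values from the `down` page, and a running DefraDB node is assumed):

```shell
# Reverse the migration on the given documents of collection id 2.
defradb client schema migration down --collection 2 '[{"name": "Bob"}]'

# Reload the schema migrations within DefraDB.
defradb client schema migration reload
```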
+ +### Options + +``` + -h, --help help for migration +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a DefraDB node +* [defradb client schema migration down](defradb_client_schema_migration_down.md) - Reverses the migration to the specified collection version.
+* [defradb client schema migration reload](defradb_client_schema_migration_reload.md) - Reload the schema migrations within DefraDB +* [defradb client schema migration set](defradb_client_schema_migration_set.md) - Set a schema migration within DefraDB +* [defradb client schema migration set-registry](defradb_client_schema_migration_set-registry.md) - Set a schema migration within the DefraDB LensRegistry +* [defradb client schema migration up](defradb_client_schema_migration_up.md) - Applies the migration to the specified collection version. + diff --git a/docs/defradb/references/cli/defradb_client_schema_migration_down.md b/docs/defradb/references/cli/defradb_client_schema_migration_down.md new file mode 100644 index 0000000..2ec8a26 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_migration_down.md @@ -0,0 +1,57 @@ +## defradb client schema migration down + +Reverses the migration to the specified collection version. + +### Synopsis + +Reverses the migration to the specified collection version. +The documents argument is a list of documents on which to reverse the migration. + +Example: migrate from string + defradb client schema migration down --collection 2 '[{"name": "Bob"}]' + +Example: migrate from file + defradb client schema migration down --collection 2 -f documents.json + +Example: migrate from stdin + cat documents.json | defradb client schema migration down --collection 2 - + + +``` +defradb client schema migration down --collection [flags] +``` + +### Options + +``` + --collection uint32 Collection id + -f, --file string File containing document(s) + -h, --help help for down +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema migration](defradb_client_schema_migration.md) - Interact with the schema migration system of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_schema_migration_reload.md b/docs/defradb/references/cli/defradb_client_schema_migration_reload.md new file mode 100644 index 0000000..07011eb --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_migration_reload.md @@ -0,0 +1,44 @@ +## defradb client schema migration reload + +Reload the schema migrations within DefraDB + +### Synopsis + +Reload the schema migrations within DefraDB + +``` +defradb client schema migration reload [flags] +``` + +### Options + +``` + -h, --help help for reload +``` + +### Options inherited from parent commands + +``` + -i,
--identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema migration](defradb_client_schema_migration.md) - Interact with the schema migration system of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_schema_migration_set-registry.md b/docs/defradb/references/cli/defradb_client_schema_migration_set-registry.md new file mode 100644 index 0000000..a62a1a5 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_migration_set-registry.md @@ -0,0 +1,50 @@ +## defradb client schema migration set-registry + +Set a schema migration within the DefraDB LensRegistry + +### Synopsis + +Set a migration to a collection within the LensRegistry of the
local DefraDB node. +Does not persist the migration after restart. + +Example: set from an argument string: + defradb client schema migration set-registry 2 '{"lenses": [...' + +Learn more about the DefraDB GraphQL Schema Language on https://docs.source.network. + +``` +defradb client schema migration set-registry [collectionID] [cfg] [flags] +``` + +### Options + +``` + -h, --help help for set-registry +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema migration](defradb_client_schema_migration.md) - Interact with the schema migration system of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_schema_migration_set.md b/docs/defradb/references/cli/defradb_client_schema_migration_set.md new file mode 100644 index 0000000..9543136 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_migration_set.md @@ -0,0 +1,57 @@ +## defradb client schema migration set + +Set a schema migration within DefraDB + +### Synopsis + +Set a migration from a source schema version to a destination schema version for +all collections that are on the given source schema version within the local DefraDB node. + +Example: set from an argument string: + defradb client schema migration set bae123 bae456 '{"lenses": [...' + +Example: set from file: + defradb client schema migration set bae123 bae456 -f schema_migration.lens + +Example: add from stdin: + cat schema_migration.lens | defradb client schema migration set bae123 bae456 - + +Learn more about the DefraDB GraphQL Schema Language on https://docs.source.network. 
+ +``` +defradb client schema migration set [src] [dst] [cfg] [flags] +``` + +### Options + +``` + -f, --file string Lens configuration file + -h, --help help for set +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema migration](defradb_client_schema_migration.md) - Interact with the schema migration system of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_schema_migration_up.md b/docs/defradb/references/cli/defradb_client_schema_migration_up.md new file mode 100644 index 0000000..1b59edb --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_migration_up.md @@
-0,0 +1,57 @@ +## defradb client schema migration up + +Applies the migration to the specified collection version. + +### Synopsis + +Applies the migration to the specified collection version. +Documents is a list of documents to apply the migration to. + +Example: migrate from string + defradb client schema migration up --collection 2 '[{"name": "Bob"}]' + +Example: migrate from file + defradb client schema migration up --collection 2 -f documents.json + +Example: migrate from stdin + cat documents.json | defradb client schema migration up --collection 2 - + + +``` +defradb client schema migration up --collection [flags] +``` + +### Options + +``` + --collection uint32 Collection id + -f, --file string File containing document(s) + -h, --help help for up +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema migration](defradb_client_schema_migration.md) - Interact with the schema migration system of a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_schema_patch.md b/docs/defradb/references/cli/defradb_client_schema_patch.md new file mode 100644 index 0000000..e2b58fe --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_patch.md @@ -0,0 +1,60 @@ +## defradb client schema patch + +Patch an existing schema type + +### Synopsis + +Patch an existing schema. + +Uses JSON Patch to modify schema types. + +Example: patch from an argument string: + defradb client schema patch '[{ "op": "add", "path": "...", "value": {...} }]' '{"lenses": [...' + +Example: patch from file: + defradb client schema patch -p patch.json + +Example: patch from stdin: + cat patch.json | defradb client schema patch - + +To learn more about the DefraDB GraphQL Schema Language, refer to https://docs.source.network. 
+ +``` +defradb client schema patch [schema] [migration] [flags] +``` + +### Options + +``` + -h, --help help for patch + -t, --lens-file string File to load a lens config from + -p, --patch-file string File to load a patch from + --set-active Set the active schema version for all collections using the root schema +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_schema_set-active.md b/docs/defradb/references/cli/defradb_client_schema_set-active.md new file mode 100644 index 0000000..7b637da --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_schema_set-active.md @@ -0,0 +1,45 @@ +## defradb client schema set-active + +Set the active collection version + +### Synopsis + +Activates all collection versions with the given schema version, and deactivates all +those without it (if they share the same schema root). + +``` +defradb client schema set-active [versionID] [flags] +``` + +### Options + +``` + -h, --help help for set-active +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. 
Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a DefraDB node + diff --git a/docs/defradb/references/cli/defradb_client_tx.md b/docs/defradb/references/cli/defradb_client_tx.md new file mode 100644 index 0000000..01353b8 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_tx.md @@ -0,0 +1,43 @@ +## defradb client tx + +Create, commit, and discard DefraDB transactions + +### Synopsis + +Create, commit, and discard DefraDB transactions + +### Options + +``` + -h, --help help for tx +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use.
Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client tx commit](defradb_client_tx_commit.md) - Commit a DefraDB transaction. +* [defradb client tx create](defradb_client_tx_create.md) - Create a new DefraDB transaction. +* [defradb client tx discard](defradb_client_tx_discard.md) - Discard a DefraDB transaction. + diff --git a/docs/defradb/references/cli/defradb_client_tx_commit.md b/docs/defradb/references/cli/defradb_client_tx_commit.md new file mode 100644 index 0000000..557b9f0 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_tx_commit.md @@ -0,0 +1,44 @@ +## defradb client tx commit + +Commit a DefraDB transaction. + +### Synopsis + +Commit a DefraDB transaction. + +``` +defradb client tx commit [id] [flags] +``` + +### Options + +``` + -h, --help help for commit +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client tx](defradb_client_tx.md) - Create, commit, and discard DefraDB transactions + diff --git a/docs/defradb/references/cli/defradb_client_tx_create.md b/docs/defradb/references/cli/defradb_client_tx_create.md new file mode 100644 index 0000000..f174a09 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_tx_create.md @@ -0,0 +1,46 @@ +## defradb client tx create + +Create a new DefraDB transaction. + +### Synopsis + +Create a new DefraDB transaction.
+ +``` +defradb client tx create [flags] +``` + +### Options + +``` + --concurrent Transaction is concurrent + -h, --help help for create + --read-only Transaction is read only +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client tx](defradb_client_tx.md) - Create, commit, and discard DefraDB transactions + diff --git a/docs/defradb/references/cli/defradb_client_tx_discard.md b/docs/defradb/references/cli/defradb_client_tx_discard.md new file mode 100644 index 0000000..671d4f2 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_tx_discard.md @@ -0,0 +1,44 @@ +## defradb client tx discard + +Discard a DefraDB
transaction. + +### Synopsis + +Discard a DefraDB transaction. + +``` +defradb client tx discard [id] [flags] +``` + +### Options + +``` + -h, --help help for discard +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client tx](defradb_client_tx.md) - Create, commit, and discard DefraDB transactions + diff --git a/docs/defradb/references/cli/defradb_client_view.md b/docs/defradb/references/cli/defradb_client_view.md new file mode 100644 index 0000000..bf21e03 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_view.md @@ -0,0 +1,42 @@ +## defradb client view + +Manage views within a running DefraDB instance + +###
Synopsis + +Manage (add) views within a running DefraDB instance + +### Options + +``` + -h, --help help for view +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client](defradb_client.md) - Interact with a DefraDB node +* [defradb client view add](defradb_client_view_add.md) - Add new view +* [defradb client view refresh](defradb_client_view_refresh.md) - Refresh views.
+ diff --git a/docs/defradb/references/cli/defradb_client_view_add.md b/docs/defradb/references/cli/defradb_client_view_add.md new file mode 100644 index 0000000..c5073d7 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_view_add.md @@ -0,0 +1,50 @@ +## defradb client view add + +Add new view + +### Synopsis + +Add new database view. + +Example: add from an argument string: + defradb client view add 'Foo { name, ...}' 'type Foo { ... }' '{"lenses": [...' + +Learn more about the DefraDB GraphQL Schema Language on https://docs.source.network. + +``` +defradb client view add [query] [sdl] [transform] [flags] +``` + +### Options + +``` + -f, --file string Lens configuration file + -h, --help help for add +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client view](defradb_client_view.md) - Manage views within a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_client_view_refresh.md b/docs/defradb/references/cli/defradb_client_view_refresh.md new file mode 100644 index 0000000..28e9151 --- /dev/null +++ b/docs/defradb/references/cli/defradb_client_view_refresh.md @@ -0,0 +1,66 @@ +## defradb client view refresh + +Refresh views. + +### Synopsis + +Refresh views, executing the underlying query and LensVm transforms and +persisting the results. + +View is refreshed as the current user, meaning the cached items will reflect that user's +permissions. Subsequent query requests to the view, regardless of user, will receive +items from that cache. + +Example: refresh all views + defradb client view refresh + +Example: refresh views by name + defradb client view refresh --name UserView + +Example: refresh views by schema root id + defradb client view refresh --schema bae123 + +Example: refresh views by version id. 
This will also return inactive views + defradb client view refresh --version bae123 + + +``` +defradb client view refresh [flags] +``` + +### Options + +``` + --get-inactive Get inactive views as well as active + -h, --help help for refresh + --name string View name + --schema string View schema Root + --version string View version ID +``` + +### Options inherited from parent commands + +``` + -i, --identity string Hex formatted private key used to authenticate with ACP + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --tx uint Transaction ID + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb client view](defradb_client_view.md) - Manage views within a running DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_identity.md b/docs/defradb/references/cli/defradb_identity.md new file mode 100644 index 0000000..cb0d0a6 --- /dev/null +++ b/docs/defradb/references/cli/defradb_identity.md @@ -0,0 +1,39 @@ +## defradb identity + +Interact with identity features of DefraDB instance + +### Synopsis + +Interact with identity features of DefraDB instance + +### Options + +``` + -h, --help help for identity +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb](defradb.md) - DefraDB Edge Database +* [defradb identity new](defradb_identity_new.md) - Generate a new identity + diff --git a/docs/defradb/references/cli/defradb_identity_new.md b/docs/defradb/references/cli/defradb_identity_new.md new file mode 100644 index 0000000..0fc3a73 --- /dev/null +++ b/docs/defradb/references/cli/defradb_identity_new.md @@ -0,0 +1,53 @@ +## defradb identity new + +Generate a new identity + +### Synopsis + +Generate a new identity + +The generated identity contains: +- A secp256k1 private key that is a 256-bit big-endian binary-encoded number, +padded to a length of 32 bytes in HEX format. +- A compressed 33-byte secp256k1 public key in HEX format. +- A "did:key" generated from the public key. + +Example: generate a new identity: + defradb identity new + + + +``` +defradb identity new [flags] +``` + +### Options + +``` + -h, --help help for new +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. 
Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb identity](defradb_identity.md) - Interact with identity features of DefraDB instance + diff --git a/docs/defradb/references/cli/defradb_keyring.md b/docs/defradb/references/cli/defradb_keyring.md new file mode 100644 index 0000000..362273c --- /dev/null +++ b/docs/defradb/references/cli/defradb_keyring.md @@ -0,0 +1,57 @@ +## defradb keyring + +Manage DefraDB private keys + +### Synopsis + +Manage DefraDB private keys. +Generate, import, and export private keys. + +The following keys are loaded from the keyring on start: + peer-key: Ed25519 private key (required) + encryption-key: AES-128, AES-192, or AES-256 key (optional) + +To randomly generate the required keys, run the following command: + defradb keyring generate + +To import externally generated keys, run the following command: + defradb keyring import <name> <private-key-hex> + +To learn more about the available options: + defradb keyring --help + + +### Options + +``` + -h, --help help for keyring +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use.
Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb](defradb.md) - DefraDB Edge Database +* [defradb keyring export](defradb_keyring_export.md) - Export a private key +* [defradb keyring generate](defradb_keyring_generate.md) - Generate private keys +* [defradb keyring import](defradb_keyring_import.md) - Import a private key +* [defradb keyring list](defradb_keyring_list.md) - List all keys in the keyring + diff --git a/docs/defradb/references/cli/defradb_keyring_export.md b/docs/defradb/references/cli/defradb_keyring_export.md new file mode 100644 index 0000000..083654b --- /dev/null +++ b/docs/defradb/references/cli/defradb_keyring_export.md @@ -0,0 +1,50 @@ +## defradb keyring export + +Export a private key + +### Synopsis + +Export a private key. +Prints the hexadecimal representation of a private key.
+ +The DEFRA_KEYRING_SECRET environment variable must be set to unlock the keyring. +This can also be done with a .env file in the working directory or at a path +defined with the --secret-file flag. + +Example: + defradb keyring export encryption-key + +``` +defradb keyring export [flags] +``` + +### Options + +``` + -h, --help help for export +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb keyring](defradb_keyring.md) - Manage DefraDB private keys + diff --git a/docs/defradb/references/cli/defradb_keyring_generate.md b/docs/defradb/references/cli/defradb_keyring_generate.md new file mode 100644 index 0000000..9c7b99b --- /dev/null +++ b/docs/defradb/references/cli/defradb_keyring_generate.md @@ -0,0 +1,64 @@ +## defradb keyring generate + +Generate private keys + +### Synopsis + +Generate private keys. +Randomly generate and store private keys in the keyring. +By default peer and encryption keys will be generated. + +The DEFRA_KEYRING_SECRET environment variable must be set to unlock the keyring. +This can also be done with a .env file in the working directory or at a path +defined with the --secret-file flag. + +WARNING: This will overwrite existing keys in the keyring. + +Example: + defradb keyring generate + +Example: with no encryption key + defradb keyring generate --no-encryption + +Example: with no peer key + defradb keyring generate --no-peer-key + +Example: with system keyring + defradb keyring generate --keyring-backend system + +``` +defradb keyring generate [flags] +``` + +### Options + +``` + -h, --help help for generate + --no-encryption Skip generating an encryption key. Encryption at rest will be disabled + --no-peer-key Skip generating a peer key. 
+``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb keyring](defradb_keyring.md) - Manage DefraDB private keys + diff --git a/docs/defradb/references/cli/defradb_keyring_import.md b/docs/defradb/references/cli/defradb_keyring_import.md new file mode 100644 index 0000000..d6a275d --- /dev/null +++ b/docs/defradb/references/cli/defradb_keyring_import.md @@ -0,0 +1,50 @@ +## defradb keyring import + +Import a private key + +### Synopsis + +Import a private key. +Store an externally generated key in the keyring. + +The DEFRA_KEYRING_SECRET environment variable must be set to unlock the keyring. +This can also be done with a .env file in the working directory or at a path +defined with the --secret-file flag.
+ +Example: + defradb keyring import encryption-key 0000000000000000 + +``` +defradb keyring import [flags] +``` + +### Options + +``` + -h, --help help for import +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb keyring](defradb_keyring.md) - Manage DefraDB private keys + diff --git a/docs/defradb/references/cli/defradb_keyring_list.md b/docs/defradb/references/cli/defradb_keyring_list.md new file mode 100644 index 0000000..fb43aef --- /dev/null +++ b/docs/defradb/references/cli/defradb_keyring_list.md @@ -0,0 +1,48 @@ +## defradb keyring list + +List all keys in the keyring + +### Synopsis + +List all keys in the keyring. +The DEFRA_KEYRING_SECRET environment variable must be set to unlock the keyring.
+This can also be done with a .env file in the working directory or at a path +defined with the --secret-file flag. + +Example: + defradb keyring list + +``` +defradb keyring list [flags] +``` + +### Options + +``` + -h, --help help for list +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb keyring](defradb_keyring.md) - Manage DefraDB private keys + diff --git a/docs/defradb/references/cli/defradb_server-dump.md b/docs/defradb/references/cli/defradb_server-dump.md new file mode 100644 index 0000000..3aafdcf --- /dev/null +++ b/docs/defradb/references/cli/defradb_server-dump.md @@ -0,0 +1,38 @@ +## defradb server-dump + +Dumps the state of the entire database + +``` +defradb server-dump [flags] +``` + +### Options + +``` + -h,
--help help for server-dump +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,... + --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb](defradb.md) - DefraDB Edge Database + diff --git a/docs/defradb/references/cli/defradb_start.md b/docs/defradb/references/cli/defradb_start.md new file mode 100644 index 0000000..5aea7e8 --- /dev/null +++ b/docs/defradb/references/cli/defradb_start.md @@ -0,0 +1,55 @@ +## defradb start + +Start a DefraDB node + +### Synopsis + +Start a DefraDB node.
+ +``` +defradb start [flags] +``` + +### Options + +``` + --allowed-origins stringArray List of origins to allow for CORS requests + --development Enables a set of features that make development easier but should not be enabled in production: + - allows purging of all persisted data + - generates temporary node identity if keyring is disabled + -h, --help help for start + --max-txn-retries int Specify the maximum number of retries per transaction (default 5) + --no-encryption Skip generating an encryption key. Encryption at rest will be disabled. WARNING: This cannot be undone. + --no-p2p Disable the peer-to-peer network synchronization system + --p2paddr strings Listen addresses for the p2p network (formatted as a libp2p MultiAddr) (default [/ip4/127.0.0.1/tcp/9171]) + --peers stringArray List of peers to connect to + --privkeypath string Path to the private key for tls + --pubkeypath string Path to the public key for tls + --store string Specify the datastore to use (supported: badger, memory) (default "badger") + --valuelogfilesize int Specify the datastore value log file size (in bytes). In memory size will be 2*valuelogfilesize (default 1073741824) +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb](defradb.md) - DefraDB Edge Database + diff --git a/docs/defradb/references/cli/defradb_version.md b/docs/defradb/references/cli/defradb_version.md new file mode 100644 index 0000000..fdd5010 --- /dev/null +++ b/docs/defradb/references/cli/defradb_version.md @@ -0,0 +1,40 @@ +## defradb version + +Display the version information of DefraDB and its components + +``` +defradb version [flags] +``` + +### Options + +``` + -f, --format string Version output format. Options are text, json + --full Display the full version information + -h, --help help for version +``` + +### Options inherited from parent commands + +``` + --keyring-backend string Keyring backend to use. Options are file or system (default "file") + --keyring-namespace string Service name to use when using the system backend (default "defradb") + --keyring-path string Path to store encrypted keys when using the file backend (default "keys") + --log-format string Log format to use. Options are text or json (default "text") + --log-level string Log level to use. Options are debug, info, error, fatal (default "info") + --log-output string Log output path. Options are stderr or stdout. (default "stderr") + --log-overrides string Logger config overrides. Format <name>,<key>=<val>,...;<name>,...
+ --log-source Include source location in logs + --log-stacktrace Include stacktrace in error and fatal logs + --no-keyring Disable the keyring and generate ephemeral keys + --no-log-color Disable colored log output + --rootdir string Directory for persistent data (default: $HOME/.defradb) + --secret-file string Path to the file containing secrets (default ".env") + --source-hub-address string The SourceHub address authorized by the client to make SourceHub transactions on behalf of the actor + --url string URL of HTTP endpoint to listen on or connect to (default "127.0.0.1:9181") +``` + +### SEE ALSO + +* [defradb](defradb.md) - DefraDB Edge Database + diff --git a/docs/defradb/references/config.md b/docs/defradb/references/config.md new file mode 100644 index 0000000..e3733e7 --- /dev/null +++ b/docs/defradb/references/config.md @@ -0,0 +1,171 @@ +--- +sidebar_label: Config +sidebar_position: 1 +--- + +# DefraDB configuration (YAML) + +The default DefraDB directory is `$HOME/.defradb`. It can be changed via the --rootdir CLI flag. + +Relative paths are interpreted as being rooted in the DefraDB directory. + +## `development` + +Enables a set of features that make development easier but should not be enabled in production. + +## `datastore.store` + +Store can be badger or memory. Defaults to `badger`. + +- badger: fast pure Go key-value store optimized for SSDs (https://github.com/dgraph-io/badger) +- memory: in-memory version of badger + +## `datastore.maxtxnretries` + +The number of retries to make in the event of a transaction conflict. Defaults to `5`. + +Currently this is only used within the P2P system and will not affect operations initiated by users. + +## `datastore.noencryption` + +Skip generating an encryption key. Encryption at rest will be disabled. **WARNING**: This cannot be undone. + +## `datastore.badger.path` + +The path to the database data file(s). Defaults to `data`. 
+ +## `datastore.badger.valuelogfilesize` + +Maximum file size of the value log files. + +## `api.address` + +Address of the HTTP API to listen on or connect to. Defaults to `127.0.0.1:9181`. + +## `api.allowed-origins` + +The list of origins a cross-domain request can be executed from. + +## `api.pubkeypath` + +The path to the public key file for TLS / HTTPS. + +## `api.privkeypath` + +The path to the private key file for TLS / HTTPS. + +## `net.p2pdisabled` + +Whether P2P networking is disabled. Defaults to `false`. + +## `net.p2paddresses` + +List of addresses for the P2P network to listen on. Defaults to `/ip4/127.0.0.1/tcp/9171`. + +## `net.pubsubenabled` + +Whether PubSub is enabled. Defaults to `true`. + +## `net.peers` + +List of peers to bootstrap with, specified as multiaddresses. + +https://docs.libp2p.io/concepts/addressing/ + +## `net.relay` + +Enable libp2p's Circuit relay transport protocol. Defaults to `false`. + +https://docs.libp2p.io/concepts/circuit-relay/ + +## `log.level` + +Log level to use. Options are `debug`, `info`, `error`, or `fatal`. Defaults to `info`. + +## `log.output` + +Log output path. Options are `stderr` or `stdout`. Defaults to `stderr`. + +## `log.format` + +Log format to use. Options are `text` or `json`. Defaults to `text`. + +## `log.stacktrace` + +Include stacktrace in error and fatal logs. Defaults to `false`. + +## `log.source` + +Include source location in logs. Defaults to `false`. + +## `log.overrides` + +Logger config overrides. Format `<name>,<key>=<val>,...;<name>,...`. + +## `log.colordisabled` + +Disable colored log output. Defaults to `false`. + +## `keyring.path` + +Path to store encrypted key files in. Defaults to `keys`. + +## `keyring.disabled` + +Disable the keyring and generate ephemeral keys instead. Defaults to `false`. + +## `keyring.namespace` + +The service name to use when using the system keyring. Defaults to `defradb`. + +## `keyring.backend` + +Keyring backend to use. Defaults to `file`.
+ +- `file` Stores keys in encrypted files +- `system` Stores keys in the OS managed keyring + +## `lens.runtime` + +The LensVM wasm runtime to run lens modules in. + +Possible values: +- `wasm-time` (default): https://github.com/bytecodealliance/wasmtime-go +- `wasmer` (windows not supported): https://github.com/wasmerio/wasmer-go +- `wazero`: https://github.com/tetratelabs/wazero + +## `acp.type` + +The type of ACP module to use. + +Possible values: +- `none` (default): No ACP +- `local` local-only ACP +- `source-hub` source hub ACP: https://github.com/sourcenetwork/sourcehub + +## `acp.sourceHub.ChainID` + +The ID of the SourceHub chain to store ACP data in. Required when using `acp.type`:`source-hub`. + +## `acp.sourceHub.GRPCAddress` + +The address of the SourceHub GRPC server. Required when using `acp.type`:`source-hub`. + +## `acp.sourceHub.CometRPCAddress` + +The address of the SourceHub Comet RPC server. Required when using `acp.type`:`source-hub`. + +## `acp.sourceHub.KeyName` + +The name of the key in the keyring where the SourceHub credentials used to sign (and pay for) SourceHub +transactions created by the node are stored. Required when using `acp.type`:`source-hub`. + +## `acp.sourceHub.address` + +The SourceHub address of the actor that client-side actions should authorize to make SourceHub actions on +their behalf. This is a client-side-only config parameter. It is required if the client wishes to make +SourceHub ACP requests in order to create protected data. + +## `secretfile` + +Path to the file containing secrets. Defaults to `.env`.
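+ +## Example configuration + +For orientation, a sketch of a `config.yaml` assembled from the options documented above. The values shown are the documented defaults plus placeholder entries, not recommendations; check them against your DefraDB version before use: + +```yaml +datastore: + store: badger + maxtxnretries: 5 + badger: + path: data +api: + address: 127.0.0.1:9181 +net: + p2pdisabled: false + p2paddresses: + - /ip4/127.0.0.1/tcp/9171 + pubsubenabled: true +log: + level: info + format: text + output: stderr +keyring: + backend: file + path: keys +```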
diff --git a/docs/references/query-specification/_category_.json b/docs/defradb/references/query-specification/_category_.json similarity index 100% rename from docs/references/query-specification/_category_.json rename to docs/defradb/references/query-specification/_category_.json diff --git a/docs/references/query-specification/aggregate-functions.md b/docs/defradb/references/query-specification/aggregate-functions.md similarity index 100% rename from docs/references/query-specification/aggregate-functions.md rename to docs/defradb/references/query-specification/aggregate-functions.md diff --git a/docs/references/query-specification/aliases.md b/docs/defradb/references/query-specification/aliases.md similarity index 88% rename from docs/references/query-specification/aliases.md rename to docs/defradb/references/query-specification/aliases.md index acd5db1..32be263 100644 --- a/docs/references/query-specification/aliases.md +++ b/docs/defradb/references/query-specification/aliases.md @@ -8,7 +8,7 @@ If the structure of a returned query is not ideal for a given application, you c ```graphql { - topTenBooks: Books(sort: {rating: DESC}, limit: 10) { + topTenBooks: Books(order: {rating: DESC}, limit: 10) { title genre description @@ -20,13 +20,13 @@ In the above example, the books result is renamed to `topTenBooks`, which can be ```graphql { - topTenBooks: Books(sort: {rating: DESC}, limit: 10) { + topTenBooks: Books(order: {rating: DESC}, limit: 10) { title genre description } - bottomTenBooks: Books(sort: {rating: ASC}, limit: 10) { + bottomTenBooks: Books(order: {rating: ASC}, limit: 10) { title genre description diff --git a/docs/references/query-specification/collections.md b/docs/defradb/references/query-specification/collections.md similarity index 100% rename from docs/references/query-specification/collections.md rename to docs/defradb/references/query-specification/collections.md diff --git a/docs/references/query-specification/database-api.md 
b/docs/defradb/references/query-specification/database-api.md similarity index 98% rename from docs/references/query-specification/database-api.md rename to docs/defradb/references/query-specification/database-api.md index e2cec7d..4bf9533 100644 --- a/docs/references/query-specification/database-api.md +++ b/docs/defradb/references/query-specification/database-api.md @@ -45,7 +45,7 @@ type Delta { To query the latest commit of an object (with id: '123'): ```graphql query { - latestCommits(docid: "123") { + latestCommits(docID: "123") { cid height delta { @@ -58,7 +58,7 @@ query { To query all the commits of an object (with id: '123'): ```graphql query { - allCommits(docid: "123") { + allCommits(docID: "123") { cid height delta { diff --git a/docs/references/query-specification/execution-flow.md b/docs/defradb/references/query-specification/execution-flow.md similarity index 100% rename from docs/references/query-specification/execution-flow.md rename to docs/defradb/references/query-specification/execution-flow.md diff --git a/docs/references/query-specification/filtering.md b/docs/defradb/references/query-specification/filtering.md similarity index 100% rename from docs/references/query-specification/filtering.md rename to docs/defradb/references/query-specification/filtering.md diff --git a/docs/references/query-specification/grouping.md b/docs/defradb/references/query-specification/grouping.md similarity index 100% rename from docs/references/query-specification/grouping.md rename to docs/defradb/references/query-specification/grouping.md diff --git a/docs/references/query-specification/limiting-and-pagination.md b/docs/defradb/references/query-specification/limiting-and-pagination.md similarity index 89% rename from docs/references/query-specification/limiting-and-pagination.md rename to docs/defradb/references/query-specification/limiting-and-pagination.md index fd5577e..983c56d 100644 --- a/docs/references/query-specification/limiting-and-pagination.md +++ 
b/docs/defradb/references/query-specification/limiting-and-pagination.md @@ -9,7 +9,7 @@ After filtering and sorting a query, we can then limit and skip elements from th Let us get the top 10 rated books: ```graphql { - Books(sort: {rating: DESC}, limit: 10) { + Books(order: {rating: DESC}, limit: 10) { title genre description @@ -22,7 +22,7 @@ The `limit` function accepts the maximum number of items to return from the resu Let's get the *next* top 10 rated books after the previous query: ```graphql { - Books(sort: {rating: DESC}, limit:10, offset: 10) { + Books(order: {rating: DESC}, limit: 10, offset: 10) { title genre description diff --git a/docs/references/query-specification/mutation-block.md b/docs/defradb/references/query-specification/mutation-block.md similarity index 74% rename from docs/references/query-specification/mutation-block.md rename to docs/defradb/references/query-specification/mutation-block.md index 99529ce..c22c46d 100644 --- a/docs/references/query-specification/mutation-block.md +++ b/docs/defradb/references/query-specification/mutation-block.md @@ -18,17 +18,17 @@ Insert is used to create new documents from scratch. This involves many necessar type Book { ... } mutation { - create_Book(data: createBookPayload) [Book] + create_Book(input: createBookInput) [Book] } ``` -The above example displays the general structure of an insert mutation. You call the `createTYPE` mutation, with the given data payload. +The above example displays the general structure of an insert mutation. You call the `create_TYPE` mutation, with the given input. -### Payload Format +### Input Object Type -All mutations use a payload to update the data. Unlike the rest of the Query system, mutation payloads aren't typed. Instead, they use a standard JSON Serialization format. Removing the type system from payloads allows flexibility in the system. +All mutations use a typed input object to update the data. 
-JSON Supports all the same types as DefraDB, and it's familiar for developers. Hence, it is an obvious choice for us. The following is an example with a full type and input object: ```graphql type Book { @@ -38,11 +38,11 @@ } mutation { - create_Book(data: "{ - 'title': 'Painted House', - 'description': 'The story begins as Luke Chandler ...', - 'rating': 4.9 - }") { + create_Book(input: { + title: "Painted House", + description: "The story begins as Luke Chandler ...", + rating: 4.9 + }) { title description rating @@ -65,32 +65,30 @@ Update filters use the same format and types from the Query system. Hence, it ea The structure of the generated update mutation for a `Book` type is given below: ```graphql mutation { - update_Book(dockey: ID, filter: BookFilterArg, data: updateBookPayload) [Book] + update_Book(docID: ID, filter: BookFilterArg, input: updateBookInput) [Book] } ``` See the structure and syntax of the filter query above. You can also see an additional field `id`, which will supersede the `filter`; this makes it easy to update a single document by a given ID. -More important than the Update filter, is the update payload. Currently all update payloads use the `JSON Merge Patch` system. - -`JSON Merge Patch` is very similar to a traditional JSON object, with a few semantic differences that are important for Updates. The most significant aspect is how to remove or delete a field value in a document. To remove a `JSON Merge Patch` field. we provide a `nil` value in the JSON object. +The input object type is the same for both `update_TYPE` and `create_TYPE` mutations. Here's an example: ```json { - "name": "John", - "rating": nil + name: "John", + rating: null } ``` -This Merge Patch sets the `name` field to "John" and deletes the `rating` field value. +This update sets the `name` field to "John" and deletes the `rating` field value.
Once we create our update, and select which document(s) to update, we can query the new state of all documents affected by the mutation. This is because our update mutation returns the type it mutates. A basic example is provided below: ```graphql mutation { - update_Book(dockey: '123', data: "{'name': 'John'}") { + update_Book(docID: "123", input: {name: "John"}) { _key name } @@ -104,7 +102,7 @@ Beyond updating by an ID or IDs, we can use a query filter to select which field ```graphql mutation { - update_Book(filter: {rating: {_le: 1.0}}, data: "{'rating': '1.5'}") { + update_Book(filter: {rating: {_le: 1.0}}, input: {rating: 1.5}) { _key rating name @@ -126,14 +124,14 @@ The document selection interface is identical to the `Update` system. Much like The structure of the generated delete mutation for a `Book` type is given below: ```graphql mutation { - delete_Book(dockey: ID, ids: [ID], filter: BookFilterArg) [Book] + delete_Book(docID: ID, ids: [ID], filter: BookFilterArg) [Book] } ``` Here, we can delete a document with ID '123': ```graphql mutation { - delete_User(dockey: '123') { + delete_User(docID: "123") { _key name } diff --git a/docs/references/query-specification/query-block.md b/docs/defradb/references/query-specification/query-block.md similarity index 100% rename from docs/references/query-specification/query-block.md rename to docs/defradb/references/query-specification/query-block.md diff --git a/docs/references/query-specification/query-language-overview.md b/docs/defradb/references/query-specification/query-language-overview.md similarity index 100% rename from docs/references/query-specification/query-language-overview.md rename to docs/defradb/references/query-specification/query-language-overview.md diff --git a/docs/references/query-specification/relationships.md b/docs/defradb/references/query-specification/relationships.md similarity index 53% rename from docs/references/query-specification/relationships.md rename to
docs/defradb/references/query-specification/relationships.md index bfecbf3..b17f667 100644 --- a/docs/references/query-specification/relationships.md +++ b/docs/defradb/references/query-specification/relationships.md @@ -8,7 +8,8 @@ DefraDB supports a number of common relational models that an application may ne Relationships are defined through the Document Schemas, using a series of GraphQL directives, and inferencing. They are always defined on both sides of the relation, meaning both objects involved in the relationship. -#### One-to-One +## One-to-One + The simplest relationship is a "one-to-one" which directly maps one document to another. The code below defines a one-to-one relationship between the `Author` and their `Address`: ```graphql @@ -27,6 +28,7 @@ type Address { ``` The types of both objects are included and DefraDB infers the relationship. As a result: + - Both objects can be queried separately. - Each object provides field level access to its related object. @@ -34,7 +36,8 @@ The notable distinction of "one-to-one" relationships is that only the DocKey of On the other hand, if you simply embed the Address within the Author type without the internal relational system, you can include the `@embed` directive, which will embed it within the parent. Objects embedded inside another using the `@embed` directive do not expose a query endpoint, so they can *only* be accessed through their parent object. Additionally, they are not assigned a DocKey. -#### One-to-Many +## One-to-Many + A "one-to-many" relationship allows us to relate several objects of one type to a single instance of another. Let us define a one-to-many relationship between an author and their books below. This example differs from the above relationship example because we relate the author to an array of books, instead of a single address.
@@ -55,22 +58,117 @@ type Book { In this case, the books object is defined within the Author object to be an array of books, indicating that *one* Author type has a relationship to *many* Book types. Internally, much like the one-to-one model, only the DocKeys are stored. However, the DocKey is only stored on one side of the relationship (the child type). In this example, only the Book type keeps a reference to its associated Author DocKey. -#### Many-to-Many +## Many-to-Many -*to be updated* +A "many-to-many" relationship allows multiple instances of one type to be related to multiple instances of another type. In DefraDB, many-to-many relationships are implemented using an explicit join type that connects the two related types. Unlike one-to-one or one-to-many relationships that are automatically managed, many-to-many relationships require an intermediary join type to be explicitly defined. -#### Multiple Relationships +Let us define a many-to-many relationship between students and courses below. A student can enroll in many courses, and a course can have many students enrolled. + +```graphql +type Student { + name: String + age: Int + enrollment: [Enrollment] +} + +type Course { + title: String + code: String + enrollment: [Enrollment] +} + +type Enrollment { + student: Student @relation(name: "student_enrollments") + course: Course @relation(name: "course_enrollments") +} +``` + +In this example, the `Enrollment` type acts as the join type that creates the many-to-many relationship between `Student` and `Course`. The join type has a one-to-many relationship with both Student and Course. Each enrollment links one student to one course. The `@relation` directive with unique names ensures that the relationships are properly distinguished. + +Similar to traditional SQL databases, you define the join type manually in your schema as a regular type. 
However, DefraDB automatically handles the relationship management between the join type and the related types, reducing the complexity of maintaining these connections. + +You can query the relationships directly from either the `Student` or `Course` type, or through the intermediary `Enrollment` type. + +```graphql +# Get students with their enrolled courses +query { + Student { + _docID + name + enrollment { + course { + title + code + } + } + } +} +``` + +```graphql +# Get courses with their enrolled students +query { + Course { + _docID + title + enrollment { + student { + name + age + } + } + } +} +``` + +You can also query the join type directly: + +```graphql +# Get all enrollments with student and course details +query { + Enrollment { + student { + name + age + } + course { + title + code + } + } +} +``` + +DefraDB handles the traversal through the join type automatically, allowing you to express complex many-to-many queries in a single request, though the query must still nest through the join type. + +The join type can also include additional fields specific to the relationship, such as enrollment date, grade, or status: + +```graphql +type Enrollment { + student: Student @relation(name: "student_enrollments") + course: Course @relation(name: "course_enrollments") + enrollmentDate: DateTime + status: String + grade: Float +} +``` + +This pattern allows you to maintain rich, contextual information about the relationship itself, not just the connection between the two types. + +## Multiple Relationships + +It is possible to define a collection of different relationship models. Additionally, we can define multiple relationships within a single type. Relationships containing unique types can simply be added to the types without issue, like the following: -It is possible to define a collection of different relationship models. Additionally, we can define multiple relationships within a single type.
Relationships containing unique types, can simply be added to the types without issue. Like the following: ```graphql type Author { name: String address: Address - books: [Book] @relation("authored_books") @index + books: [Book] @relation(name: "authored_books") @index } ``` However, in case of multiple relationships using the *same* types, you have to annotate the differences. You can use the `@relation` directive to be explicit. + ```graphql type Author { name: String @@ -86,4 +184,4 @@ type Book { } ``` -Here we have two relations of the same type. By default, their association would conflict because internally, type names are used to specify relations. We use the `@relation` to add a custom name to the relation. `@relation` can be added to any relationship, even if it's a duplicate type relationship. It exists to be explicit, and to change the default parameters of the relation. \ No newline at end of file +Here we have two relations of the same type. By default, their association would conflict because internally, type names are used to specify relations. We use the `@relation` to add a custom name to the relation. `@relation` can be added to any relationship, even if it's a duplicate type relationship. It exists to be explicit, and to change the default parameters of the relation. diff --git a/docs/references/query-specification/sorting-and-ordering.md b/docs/defradb/references/query-specification/sorting-and-ordering.md similarity index 87% rename from docs/references/query-specification/sorting-and-ordering.md rename to docs/defradb/references/query-specification/sorting-and-ordering.md index 444a244..7da53e3 100644 --- a/docs/references/query-specification/sorting-and-ordering.md +++ b/docs/defradb/references/query-specification/sorting-and-ordering.md @@ -25,7 +25,7 @@ Sorting can be applied to multiple fields in the same query. The sort order is s The query below finds all books ordered by earliest published date and then by descending order of titles. 
```graphql { - Books(order: { published_at: ASC, title: DESC }) { + Books(order: [{ published_at: ASC }, { title: DESC }]) { title genre description @@ -38,7 +38,7 @@ Additionally, you can sort sub-object fields along with root object fields. The query below finds all books ordered by earliest published date and then by the latest authors' birthday. ```graphql { - Books(order: { published_at: ASC, Author: { birthday: DESC }}) { + Books(order: [{ published_at: ASC }, { Author: { birthday: DESC } }]) { title description published_at @@ -63,7 +63,7 @@ If the DocKey is included in the sort fields, any field included afterwards will *So, instead of:* ```graphql { - Authors(order: { name: DESC, Books: { title: ASC }}) { + Authors(order: [{ name: DESC }, { Books: { title: ASC } }]) { name Books { title @@ -114,7 +114,7 @@ If you have the following objects in the database: > and the following query ```graphql { - Authors(order: { name: DESC, books: { title: ASC }}) { + Authors(order: [{ name: DESC }, { books: { title: ASC } }]) { name books { title @@ -123,14 +123,6 @@ If you have the following objects in the database: } ``` -```graphql -Books(filter: {_id: [1]}) { - title - genre - description -} -``` - -> Given there are two authors with the same name (John Grisham), the sort object `(sort: { name: "desc", Books: { title: "asc" }}` would suggest we sort duplicate authors using `Books: { title: "asc" }` as the secondary sort field. However, because the books field is an array of objects, there is no single value for the title to compare easily. +> Given there are two authors with the same name (John Grisham), the sort object `order: [{ name: DESC }, { books: { title: ASC } }]` would suggest we sort duplicate authors using `books: { title: ASC }` as the secondary sort field. However, because the books field is an array of objects, there is no single value for the title to compare easily.
> > Therefore, sorting on array sub objects from the root field is ***strictly not allowed***. diff --git a/docs/defradb/release notes/_category_.json b/docs/defradb/release notes/_category_.json new file mode 100644 index 0000000..40addbd --- /dev/null +++ b/docs/defradb/release notes/_category_.json @@ -0,0 +1,5 @@ +{ + "label": "Release Notes", + "position": 4 + } + \ No newline at end of file diff --git a/docs/defradb/release notes/v0.10.0.md b/docs/defradb/release notes/v0.10.0.md new file mode 100644 index 0000000..c5471bc --- /dev/null +++ b/docs/defradb/release notes/v0.10.0.md @@ -0,0 +1,45 @@ +--- +sidebar_position: 100 +--- +# v0.10.0 + +> 2024-03-08 + +## Changelog +DefraDB v0.10 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.9.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. + +### Features +* feat: Add JSON scalar ([#2254](https://github.com/sourcenetwork/defradb/issues/2254)) +* feat: Add case insensitive `like` operator ([#2368](https://github.com/sourcenetwork/defradb/issues/2368)) +* feat: Add composite indexes ([#2226](https://github.com/sourcenetwork/defradb/issues/2226)) +* feat: Add support for views with Lens transforms ([#2311](https://github.com/sourcenetwork/defradb/issues/2311)) +* feat: Allow setting null values on doc fields ([#2273](https://github.com/sourcenetwork/defradb/issues/2273)) +* feat: Generate OpenAPI command ([#2235](https://github.com/sourcenetwork/defradb/issues/2235)) +* feat: Model Col. 
SchemaVersions and migrations on Cols ([#2286](https://github.com/sourcenetwork/defradb/issues/2286)) +* feat: Multiple docs with nil value on unique-indexed field ([#2276](https://github.com/sourcenetwork/defradb/issues/2276)) +* feat: Replace FieldDescription.RelationType with IsPrimary ([#2288](https://github.com/sourcenetwork/defradb/issues/2288)) +* feat: Reverted order for indexed fields ([#2335](https://github.com/sourcenetwork/defradb/issues/2335)) +* feat: Rework GetCollection/SchemaByFoo funcs into single ([#2319](https://github.com/sourcenetwork/defradb/issues/2319)) +### Fix +* fix: Add `latest` image tag for ghcr ([#2340](https://github.com/sourcenetwork/defradb/issues/2340)) +* fix: Add missing delta payload ([#2306](https://github.com/sourcenetwork/defradb/issues/2306)) +* fix: Add missing directive definitions ([#2369](https://github.com/sourcenetwork/defradb/issues/2369)) +* fix: Add validation to JSON fields ([#2375](https://github.com/sourcenetwork/defradb/issues/2375)) +* fix: Fix compound relational filters in aggregates ([#2297](https://github.com/sourcenetwork/defradb/issues/2297)) +* fix: Load root dir before loading config ([#2266](https://github.com/sourcenetwork/defradb/issues/2266)) +* fix: Make peers sync secondary index ([#2390](https://github.com/sourcenetwork/defradb/issues/2390)) +* fix: Make returned collections respect explicit transactions ([#2385](https://github.com/sourcenetwork/defradb/issues/2385)) +* fix: Mark docs as deleted when querying in delete mut ([#2298](https://github.com/sourcenetwork/defradb/issues/2298)) +* fix: Move field id off of schema ([#2336](https://github.com/sourcenetwork/defradb/issues/2336)) +* fix: Update GetCollections behaviour ([#2378](https://github.com/sourcenetwork/defradb/issues/2378)) +### Refactoring +* refactor: Decouple net config ([#2258](https://github.com/sourcenetwork/defradb/issues/2258)) +* refactor: Generate field ids using a sequence 
([#2339](https://github.com/sourcenetwork/defradb/issues/2339)) +* refactor: HTTP config ([#2278](https://github.com/sourcenetwork/defradb/issues/2278)) +* refactor: Make CollectionDescription.Name Option ([#2223](https://github.com/sourcenetwork/defradb/issues/2223)) +* refactor: Make config internal to CLI ([#2310](https://github.com/sourcenetwork/defradb/issues/2310)) +* refactor: Node config ([#2296](https://github.com/sourcenetwork/defradb/issues/2296)) +* refactor: Remove unused Delete field from client.Document ([#2275](https://github.com/sourcenetwork/defradb/issues/2275)) + diff --git a/docs/defradb/release notes/v0.11.0.md b/docs/defradb/release notes/v0.11.0.md new file mode 100644 index 0000000..816e487 --- /dev/null +++ b/docs/defradb/release notes/v0.11.0.md @@ -0,0 +1,41 @@ +--- +sidebar_position: 110 +--- +# v0.11.0 + +> 2024-05-06 + +## Changelog +DefraDB v0.11 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.10.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/.
+ +### Features +* feat: Add Access Control Policy ([#2338](https://github.com/sourcenetwork/defradb/issues/2338)) +* feat: Add Defra-Lens support for branching schema ([#2421](https://github.com/sourcenetwork/defradb/issues/2421)) +* feat: Add P Counter CRDT ([#2482](https://github.com/sourcenetwork/defradb/issues/2482)) +* feat: Add PatchCollection ([#2402](https://github.com/sourcenetwork/defradb/issues/2402)) +* feat: Allow mutation of col sources via PatchCollection ([#2424](https://github.com/sourcenetwork/defradb/issues/2424)) +* feat: Force explicit primary decl. in SDL for one-ones ([#2462](https://github.com/sourcenetwork/defradb/issues/2462)) +* feat: Lens runtime config ([#2497](https://github.com/sourcenetwork/defradb/issues/2497)) +* feat: Move relation field properties onto collection ([#2529](https://github.com/sourcenetwork/defradb/issues/2529)) +* feat: Update corelog to 0.0.7 ([#2547](https://github.com/sourcenetwork/defradb/issues/2547)) +### Fix +* fix: Add check to filter result for logical ops ([#2573](https://github.com/sourcenetwork/defradb/issues/2573)) +* fix: Allow update when updating non-indexed field ([#2511](https://github.com/sourcenetwork/defradb/issues/2511)) +* fix: Handle compound filters on related indexed fields ([#2575](https://github.com/sourcenetwork/defradb/issues/2575)) +* fix: Make all array kinds nillable ([#2534](https://github.com/sourcenetwork/defradb/issues/2534)) +* fix: Return correct results from one-many indexed filter ([#2579](https://github.com/sourcenetwork/defradb/issues/2579)) +### Documentation +* docs: Add data definition document ([#2544](https://github.com/sourcenetwork/defradb/issues/2544)) +### Refactoring +* refactor: Add NormalValue ([#2404](https://github.com/sourcenetwork/defradb/issues/2404)) +* refactor: Clean up client/request package ([#2443](https://github.com/sourcenetwork/defradb/issues/2443)) +* refactor: DB transactions context 
([#2513](https://github.com/sourcenetwork/defradb/issues/2513)) +* refactor: Merge collection UpdateWith and DeleteWith ([#2531](https://github.com/sourcenetwork/defradb/issues/2531)) +* refactor: Replace logging package with corelog ([#2406](https://github.com/sourcenetwork/defradb/issues/2406)) +* refactor: Rewrite convertImmutable ([#2445](https://github.com/sourcenetwork/defradb/issues/2445)) +* refactor: Unify Field Kind and Schema properties ([#2414](https://github.com/sourcenetwork/defradb/issues/2414)) +### Testing +* test: Add flag to skip network tests ([#2495](https://github.com/sourcenetwork/defradb/issues/2495)) + diff --git a/docs/defradb/release notes/v0.12.0.md b/docs/defradb/release notes/v0.12.0.md new file mode 100644 index 0000000..bd39a45 --- /dev/null +++ b/docs/defradb/release notes/v0.12.0.md @@ -0,0 +1,60 @@ +--- +sidebar_position: 120 +--- +# v0.12.0 + +> 2024-06-28 + +## Changelog +DefraDB v0.12 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.11.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. + +### Features +* feat: Ability to generate a new identity (#2760) +* feat: Add async transaction callbacks (#2708) +* feat: Add authentication for ACP (#2649) +* feat: Allow lens runtime selection via config (#2684) +* feat: Enable sec. indexes with ACP (#2602) +* feat: Inject ACP instance into the DB instance (#2633) +* feat: Keyring (#2557) +* feat: Sec. 
indexes on relations (#2670) +### Fix +* fix: Add version check in basicTxn.Query (#2742) +* fix: Allow primary field declarations on one-many (#2796) +* fix: Change new identity keys to hex format (#2773) +* fix: Incorporate schema root into docID (#2701) +* fix: Keyring output (#2784) +* fix: Make node options composable (#2648) +* fix: Merge retry logic (#2719) +* fix: Race condition when testing CLI (#2713) +* fix: Remove limit for fetching secondary docs (#2594) +* fix: Remove shared mutable state between database instances (#2777) +* fix: Resolve incorrect merge conflict (#2723) +* fix: Return slice of correct length from db.AddSchema (#2765) +* fix: Use node representation for Block (#2746) +### Documentation +* docs: Add http/openapi documentation & ci workflow (#2678) +* docs: Document Event Update struct (#2598) +* docs: Remove reference to client ping from readme (#2793) +* docs: Streamline cli documentation (#2646) +### Refactoring +* refactor: Change counters to support encryption (#2698) +* refactor: Change from protobuf to cbor for IPLD (#2604) +* refactor: Change local_acp implementation to use acp_core (#2691) +* refactor: DAG sync and move merge outside of net package (#2658) +* refactor: Extract Defra specific logic from ACPLocal type (#2656) +* refactor: Extract definition stuff from collection.go (#2706) +* refactor: Move internal packages to internal dir (#2599) +* refactor: Reorganize global CLI flags (#2615) +* refactor: Replace subscription events publisher (#2686) +* refactor: Rework definition validation (#2720) +* refactor: Use events to test network logic (#2700) +### Testing +* test: Add relation substitute mechanic to tests (#2682) +* test: Allow assertion of AddSchema results (#2788) +* test: Allow test harness to execute benchmarks (#2740) +* test: Remove duplicate test (#2787) +* test: Support asserting on doc index in test results (#2786) +* test: Test node pkg constructor via integration test suite (#2641) + diff --git 
a/docs/defradb/release notes/v0.13.0.md b/docs/defradb/release notes/v0.13.0.md new file mode 100644 index 0000000..f73f73a --- /dev/null +++ b/docs/defradb/release notes/v0.13.0.md @@ -0,0 +1,40 @@ +--- +sidebar_position: 130 +--- +# v0.13.0 + +> 2024-08-23 + +## Changelog + +DefraDB v0.13 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.12.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. + +### Features +* feat: Doc encryption with symmetric key (#2731) +* feat: Doc field encryption (#2817) +* feat: Enable indexing for DateTime fields (#2933) +* feat: Handle P2P with SourceHub ACP (#2848) +* feat: Implement SourceHub ACP (#2657) +* feat: Remove IsObjectArray (#2859) +### Fix +* fix: Add ns precision support to time values (#2940) +* fix: Allow querying of 9th, 19th, 29th, etc collections (#2819) +* fix: Create mutation introspection (#2881) +* fix: Enable filtering doc by fields of JSON and Blob types (#2841) +* fix: Filter with date and document with nil date value (#2946) +* fix: Handle index queries where child found without parent (#2942) +* fix: Handle multiple child index joins (#2867) +* fix: No panic if filter condition on indexed field is empty (#2929) +* fix: Panic with different composite-indexed child objects (#2947) +* fix: Support one-many self joins without primary directive (#2799) +### Refactoring +* refactor: Decouple client.DB from net (#2768) +* refactor: GQL responses (#2872) +* refactor: Network test sync logic (#2748) +### Testing +* test: Add assert on DocIndex for child 
documents (#2871) +* test: Fix refreshing of docs in change detector (#2832) +* test: Remove hardcoded test identities (#2822) + diff --git a/docs/defradb/release notes/v0.14.0.md b/docs/defradb/release notes/v0.14.0.md new file mode 100644 index 0000000..1395cba --- /dev/null +++ b/docs/defradb/release notes/v0.14.0.md @@ -0,0 +1,64 @@ +--- +sidebar_position: 140 +--- +# v0.14.0 + +> 2024-10-19 + +## Changelog + +DefraDB v0.14 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.13.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.gg/w7jYQVJ/. + +### Features +* feat: JSON type filter ([#3122](https://github.com/sourcenetwork/defradb/issues/3122)) +* feat: Add replicator retry ([#3107](https://github.com/sourcenetwork/defradb/issues/3107)) +* feat: Inherit `read` permission if only `write` access ([#3108](https://github.com/sourcenetwork/defradb/issues/3108)) +* feat: JSON type coercion ([#3098](https://github.com/sourcenetwork/defradb/issues/3098)) +* feat: Ability to unrelate private documents from actors ([#3099](https://github.com/sourcenetwork/defradb/issues/3099)) +* feat: Enable Indexing of array fields ([#3092](https://github.com/sourcenetwork/defradb/issues/3092)) +* feat: Min and max numerical aggregates ([#3078](https://github.com/sourcenetwork/defradb/issues/3078)) +* feat: Ability to relate private documents to actors ([#2907](https://github.com/sourcenetwork/defradb/issues/2907)) +* feat: GraphQL upsert mutation ([#3075](https://github.com/sourcenetwork/defradb/issues/3075)) +* feat: GraphQL fragments 
([#3066](https://github.com/sourcenetwork/defradb/issues/3066)) +* feat: Secure document encryption key exchange ([#2891](https://github.com/sourcenetwork/defradb/issues/2891)) +* feat: Inline array filters ([#3028](https://github.com/sourcenetwork/defradb/issues/3028)) +* feat: CLI purge command ([#2998](https://github.com/sourcenetwork/defradb/issues/2998)) +* feat: Add support for one sided relations ([#3021](https://github.com/sourcenetwork/defradb/issues/3021)) +* feat: Add materialized views ([#3000](https://github.com/sourcenetwork/defradb/issues/3000)) +* feat: Default scalar field values ([#2997](https://github.com/sourcenetwork/defradb/issues/2997)) +* feat: GQL variables and operation name ([#2993](https://github.com/sourcenetwork/defradb/issues/2993)) + +### Fixes +* fix: Make GraphQL errors spec compliant ([#3040](https://github.com/sourcenetwork/defradb/issues/3040)) +* fix: Ignore badger path if in-memory ([#2967](https://github.com/sourcenetwork/defradb/issues/2967)) +* fix: Rework relation field kinds ([#2961](https://github.com/sourcenetwork/defradb/issues/2961)) +* fix: Panic with filter on unique composite index on relation ([#3020](https://github.com/sourcenetwork/defradb/issues/3020)) +* fix: Handle missing type in an SDL ([#3023](https://github.com/sourcenetwork/defradb/issues/3023)) +* fix: GraphQL null argument parsing ([#3013](https://github.com/sourcenetwork/defradb/issues/3013)) +* fix: Prevent mutations from secondary side of relation ([#3124](https://github.com/sourcenetwork/defradb/issues/3124)) +* fix: Treat explicitly set nil values like omitted values ([#3101](https://github.com/sourcenetwork/defradb/issues/3101)) +* fix: Remove duplication of block heads on delete ([#3096](https://github.com/sourcenetwork/defradb/issues/3096)) +* fix: Log GQL endpoint correctly on node start ([#3037](https://github.com/sourcenetwork/defradb/issues/3037)) +* fix: Panic with different composite-indexed child objects 
([#2947](https://github.com/sourcenetwork/defradb/issues/2947)) +* fix: Validate GraphQL schemas ([#3152](https://github.com/sourcenetwork/defradb/issues/3152)) +* fix: Queries with filter on 2 rel fields of composite index ([#3035](https://github.com/sourcenetwork/defradb/issues/3035)) + +### Documentation +* doc: Rename _key to _docID in docs ([#2989](https://github.com/sourcenetwork/defradb/issues/2989)) + +### Refactoring +* refactor: Change from protobuf to cbor for gRPC ([#3061](https://github.com/sourcenetwork/defradb/issues/3061)) +* refactor: GraphQL order input ([#3044](https://github.com/sourcenetwork/defradb/issues/3044)) +* refactor: Merge duplicate input args ([#3046](https://github.com/sourcenetwork/defradb/issues/3046)) +* refactor: Index field directive ([#2994](https://github.com/sourcenetwork/defradb/issues/2994)) +* refactor: Make SourceHub dep internal-only ([#2963](https://github.com/sourcenetwork/defradb/issues/2963)) + +### Testing +* test: Add bug bash tests for gql fragments ([#3136](https://github.com/sourcenetwork/defradb/issues/3136)) + +### Chore +* chore: Make keyring non-interactive ([#3026](https://github.com/sourcenetwork/defradb/issues/3026)) +* chore: Change from ipld traversal to direct link access ([#2931](https://github.com/sourcenetwork/defradb/issues/2931)) +* chore: Bump to GoLang v1.22 ([#2913](https://github.com/sourcenetwork/defradb/issues/2913)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.15.0.md b/docs/defradb/release notes/v0.15.0.md new file mode 100644 index 0000000..c848e94 --- /dev/null +++ b/docs/defradb/release notes/v0.15.0.md @@ -0,0 +1,44 @@ +--- +sidebar_position: 150 +--- +# v0.15.0 + +> 2024-12-13 + +## Changelog +DefraDB v0.15 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. 
+ +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.14.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.gg/w7jYQVJ/. + +### Features +* feat: Add ACP to pubsub KMS (#3206) +* feat: Add ability to add/delete relationship for all actors (#3254) +* feat: Add node identity (#3125) +* feat: Add support for branchable collection time-traveling (#3260) +* feat: Add support for branchable collections (#3216) +* feat: Add support for cid-only time travel queries (#3256) +* feat: Aggregate filter alias targeting (#3252) +* feat: Aggregate order alias targeting (#3293) +* feat: Error if purge request made with dev mode disabled (#3295) +* feat: Filter alias target (#3201) +* feat: Order alias target (#3217) +* feat: Support for descending fields CLI index creation (#3237) +### Fix +* fix: Add Authorization header to CORS allowed headers (#3178) +* fix: Add support for operationName and variables in HTTP GET (#3292) +* fix: Adjust OpenAPI index POST example request body (#3268) +* fix: Make requests with no identity work with "*" target (#3278) +* fix: Prevent over span (#3258) +* fix: Resolve CORS errors in OpenAPI tab of Playground (#3263) +### Documentation +* docs: Update discord link (#3231) +### Refactoring +* refactor: Add unified JSON interface (#3265) +* refactor: Breakup core/keys.go file (#3198) +* refactor: Consolidate node-related fields into a struct (#3232) +* refactor: Remove indirection from crdt packages (#3192) +* refactor: Rework core.Spans (#3210) +* refactor: Simplify merkle/crdt code (#3200) +### Testing +* test: Allow soft-referencing of Cids in tests (#3176) + diff --git a/docs/defradb/release notes/v0.2.0.md b/docs/defradb/release notes/v0.2.0.md new file mode 100644 index 0000000..ae66bf9 --- /dev/null +++ b/docs/defradb/release 
notes/v0.2.0.md @@ -0,0 +1,86 @@ +--- +sidebar_position: 20 +--- + +# v0.2.0 + +> 2022-02-07 + +DefraDB v0.2 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +This release is jam-packed with new features and a small number of breaking changes. Read the full changelog for a detailed description. Most notable features include a new Peer-to-Peer (P2P) data synchronization system, an expanded query system to support GroupBy & Aggregate operations, and lastly TimeTraveling queries, allowing you to query previous states of a document. + +Much more has been added to ensure we're building the reliable software expected of any database, such as expanded test & benchmark suites, automated bug detection, performance gains, and more. + +This release does include a Breaking Change to existing v0.1 databases regarding the internal data model, which affects the "Content Identifiers" we use to generate DocKeys and VersionIDs. If you need help migrating an existing deployment, reach out at hello@source.network or join our Discord at https://discord.source.network.
+ +### Features + +* Added Peer-to-Peer networking data synchronization ([#177](https://github.com/sourcenetwork/defradb/issues/177)) +* TimeTraveling (History Traversing) query engine and doc fetcher ([#59](https://github.com/sourcenetwork/defradb/issues/59)) +* Add Document Deletion with a Key ([#150](https://github.com/sourcenetwork/defradb/issues/150)) +* Add support for sum aggregate ([#121](https://github.com/sourcenetwork/defradb/issues/121)) +* Add support for lwwr scalar arrays (full replace on update) ([#115](https://github.com/sourcenetwork/defradb/issues/115)) +* Add count aggregate support ([#102](https://github.com/sourcenetwork/defradb/issues/102)) +* Add support for named relationships ([#108](https://github.com/sourcenetwork/defradb/issues/108)) +* Add multi doc key lookup support ([#76](https://github.com/sourcenetwork/defradb/issues/76)) +* Add basic group by functionality ([#43](https://github.com/sourcenetwork/defradb/issues/43)) +* Update datastore packages to allow use of context ([#48](https://github.com/sourcenetwork/defradb/issues/48)) + +### Bug fixes + +* Only add join if aggregating child object collection ([#188](https://github.com/sourcenetwork/defradb/issues/188)) +* Handle errors generated during input object thunks ([#123](https://github.com/sourcenetwork/defradb/issues/123)) +* Remove new types from in-memory cache on generate error ([#122](https://github.com/sourcenetwork/defradb/issues/122)) +* Support relationships where both fields have the same name ([#109](https://github.com/sourcenetwork/defradb/issues/109)) +* Handle errors generated in fields thunk ([#66](https://github.com/sourcenetwork/defradb/issues/66)) +* Ensure OperationDefinition case has at least one selection ([#24](https://github.com/sourcenetwork/defradb/pull/24)) +* Close datastore iterator on scan close ([#56](https://github.com/sourcenetwork/defradb/pull/56)) (resulted in a panic when using limit) +* Close superseded iterators before orphaning
([#56](https://github.com/sourcenetwork/defradb/pull/56)) (fixes a panic in the join code) +* Move discard to after error check ([#88](https://github.com/sourcenetwork/defradb/pull/88)) (resulted in a panic if transaction creation failed) +* Check for nil iterator before closing document fetcher ([#108](https://github.com/sourcenetwork/defradb/pull/108)) + +### Tooling +* Added benchmark suite ([#160](https://github.com/sourcenetwork/defradb/issues/160)) + +### Documentation + +* Correcting comment typos ([#142](https://github.com/sourcenetwork/defradb/issues/142)) +* Correcting README typos ([#140](https://github.com/sourcenetwork/defradb/issues/140)) + +### Testing + +* Add transaction integration tests ([#175](https://github.com/sourcenetwork/defradb/issues/175)) +* Allow running of tests using badger-file as well as IM options ([#128](https://github.com/sourcenetwork/defradb/issues/128)) +* Add test datastore selection support ([#88](https://github.com/sourcenetwork/defradb/issues/88)) + +### Refactoring + +* Datatype modification protection ([#138](https://github.com/sourcenetwork/defradb/issues/138)) +* Cleanup Linter Complaints and Setup Makefile ([#63](https://github.com/sourcenetwork/defradb/issues/63)) +* Rework document rendering to avoid data duplication and mutation ([#68](https://github.com/sourcenetwork/defradb/issues/68)) +* Remove dependency on concrete datastore implementations from db package ([#51](https://github.com/sourcenetwork/defradb/issues/51)) +* Remove all `errors.Wrap` and update them with `fmt.Errorf`.
([#41](https://github.com/sourcenetwork/defradb/issues/41)) +* Restructure integration tests to provide better visibility ([#15](https://github.com/sourcenetwork/defradb/pull/15)) +* Remove schemaless code branches ([#23](https://github.com/sourcenetwork/defradb/pull/23)) + +### Performance +* Add badger multi scan support ([#85](https://github.com/sourcenetwork/defradb/pull/85)) +* Add support for range spans ([#86](https://github.com/sourcenetwork/defradb/pull/86)) + +### Continuous integration + +* Use more accurate test coverage. ([#134](https://github.com/sourcenetwork/defradb/issues/134)) +* Disable Codecov's Patch Check +* Make codecov less strict for now to unblock development ([#125](https://github.com/sourcenetwork/defradb/issues/125)) +* Add codecov config file. ([#118](https://github.com/sourcenetwork/defradb/issues/118)) +* Add workflow that runs a job on AWS EC2 instance. ([#110](https://github.com/sourcenetwork/defradb/issues/110)) +* Add Code Test Coverage with CodeCov ([#116](https://github.com/sourcenetwork/defradb/issues/116)) +* Integrate GitHub Action for golangci-lint Annotations ([#106](https://github.com/sourcenetwork/defradb/issues/106)) +* Add Linter Check to CircleCi ([#92](https://github.com/sourcenetwork/defradb/issues/92)) + +### Chore + +* Remove the S1038 rule of the gosimple linter.
([#129](https://github.com/sourcenetwork/defradb/issues/129)) +* Update to badger v3, and use badger as default in memory store ([#56](https://github.com/sourcenetwork/defradb/issues/56)) +* Make Cid versions consistent ([#57](https://github.com/sourcenetwork/defradb/issues/57)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.2.1.md b/docs/defradb/release notes/v0.2.1.md new file mode 100644 index 0000000..e72ea06 --- /dev/null +++ b/docs/defradb/release notes/v0.2.1.md @@ -0,0 +1,57 @@ +--- +sidebar_position: 21 +--- + +# v0.2.1 + +> 2022-03-04 + +### Features + +* Add ability to delete multiple documents using filter ([#206](https://github.com/sourcenetwork/defradb/issues/206)) +* Add ability to delete multiple documents, using multiple ids ([#196](https://github.com/sourcenetwork/defradb/issues/196)) + +### Fixes + +* Concurrency control of Document using RWMutex ([#213](https://github.com/sourcenetwork/defradb/issues/213)) +* Only log errors and above when benchmarking ([#261](https://github.com/sourcenetwork/defradb/issues/261)) +* Handle proper type conversion on sort nodes ([#228](https://github.com/sourcenetwork/defradb/issues/228)) +* Return empty array if no values found ([#223](https://github.com/sourcenetwork/defradb/issues/223)) +* Close fetcher on error ([#210](https://github.com/sourcenetwork/defradb/issues/210)) +* Installing binary using defradb name ([#190](https://github.com/sourcenetwork/defradb/issues/190)) + +### Tooling + +* Add short benchmark runner option ([#263](https://github.com/sourcenetwork/defradb/issues/263)) + +### Documentation + +* Add data format changes documentation folder ([#89](https://github.com/sourcenetwork/defradb/issues/89)) +* Correcting typos ([#143](https://github.com/sourcenetwork/defradb/issues/143)) +* Update generated CLI docs ([#208](https://github.com/sourcenetwork/defradb/issues/208)) +* Updated readme with P2P section ([#220](https://github.com/sourcenetwork/defradb/issues/220)) +* 
Update old or missing license headers ([#205](https://github.com/sourcenetwork/defradb/issues/205)) +* Update git-chglog config and template ([#195](https://github.com/sourcenetwork/defradb/issues/195)) + +### Refactoring + +* Introduction of logging system ([#67](https://github.com/sourcenetwork/defradb/issues/67)) +* Restructure db/txn/multistore structures ([#199](https://github.com/sourcenetwork/defradb/issues/199)) +* Initialize database in constructor ([#211](https://github.com/sourcenetwork/defradb/issues/211)) +* Purge all println and ban it ([#253](https://github.com/sourcenetwork/defradb/issues/253)) + +### Testing + +* Detect and force breaking filesystem changes to be documented ([#89](https://github.com/sourcenetwork/defradb/issues/89)) +* Boost collection test coverage ([#183](https://github.com/sourcenetwork/defradb/issues/183)) + +### Continuous integration + +* Combine the Lint and Benchmark workflows so that the benchmark job depends on the lint job in one workflow ([#209](https://github.com/sourcenetwork/defradb/issues/209)) +* Add rule to only run benchmark if other checks are successful ([#194](https://github.com/sourcenetwork/defradb/issues/194)) +* Increase linter timeout ([#230](https://github.com/sourcenetwork/defradb/issues/230)) + +### Chore + +* Remove commented out code ([#238](https://github.com/sourcenetwork/defradb/issues/238)) +* Remove dead code from multi node ([#186](https://github.com/sourcenetwork/defradb/issues/186)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.3.0.md b/docs/defradb/release notes/v0.3.0.md new file mode 100644 index 0000000..01002c2 --- /dev/null +++ b/docs/defradb/release notes/v0.3.0.md @@ -0,0 +1,178 @@ +--- +sidebar_position: 30 +--- + +# v0.3.0 + +> 2022-08-02 + +DefraDB v0.3 is a major pre-production release.
Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +There are *several* new features in this release, and we invite you to review the official changelog below. Some highlights are the various new Grouping & Aggregation features in the query system, like top-level aggregation and group filtering. Moreover, a brand new Query Explain system was added to introspect the execution plans created by DefraDB. Lastly, we introduced a revamped CLI configuration system. + +This release does include a Breaking Change to existing v0.2.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. + +### Features + +* Add named config overrides ([#659](https://github.com/sourcenetwork/defradb/issues/659)) +* Expose color and caller log options, add validation ([#652](https://github.com/sourcenetwork/defradb/issues/652)) +* Add ability to explain `groupNode` and its attribute(s). ([#641](https://github.com/sourcenetwork/defradb/issues/641)) +* Add `@primary` directive for schema definitions ([#650](https://github.com/sourcenetwork/defradb/issues/650)) +* Add support for aggregate filters on inline arrays ([#622](https://github.com/sourcenetwork/defradb/issues/622)) +* Add explainable `renderLimitNode` & `hardLimitNode` attributes. ([#614](https://github.com/sourcenetwork/defradb/issues/614)) +* Add support for top level aggregates ([#594](https://github.com/sourcenetwork/defradb/issues/594)) +* Update `countNode` explanation to be consistent.
([#600](https://github.com/sourcenetwork/defradb/issues/600)) +* Add support for stdin as input in CLI ([#608](https://github.com/sourcenetwork/defradb/issues/608)) +* Explain `cid` & `field` attributes for `dagScanNode` ([#598](https://github.com/sourcenetwork/defradb/issues/598)) +* Add ability to explain `dagScanNode` attribute(s). ([#560](https://github.com/sourcenetwork/defradb/issues/560)) +* Add the ability to send user feedback to the console even when logging to file. ([#568](https://github.com/sourcenetwork/defradb/issues/568)) +* Add ability to explain `sortNode` attribute(s). ([#558](https://github.com/sourcenetwork/defradb/issues/558)) +* Add ability to explain `sumNode` attribute(s). ([#559](https://github.com/sourcenetwork/defradb/issues/559)) +* Introduce top-level config package ([#389](https://github.com/sourcenetwork/defradb/issues/389)) +* Add ability to explain `updateNode` attributes. ([#514](https://github.com/sourcenetwork/defradb/issues/514)) +* Add `typeIndexJoin` explainable attributes. ([#499](https://github.com/sourcenetwork/defradb/issues/499)) +* Add support to explain `countNode` attributes. ([#504](https://github.com/sourcenetwork/defradb/issues/504)) +* Add CORS capability to HTTP API ([#467](https://github.com/sourcenetwork/defradb/issues/467)) +* Add explanation of spans for `scanNode`. ([#492](https://github.com/sourcenetwork/defradb/issues/492)) +* Add ability to Explain the response plan. ([#385](https://github.com/sourcenetwork/defradb/issues/385)) +* Add aggregate filter support for groups only ([#426](https://github.com/sourcenetwork/defradb/issues/426)) +* Configurable caller option in logger ([#416](https://github.com/sourcenetwork/defradb/issues/416)) +* Add Average aggregate support ([#383](https://github.com/sourcenetwork/defradb/issues/383)) +* Allow summation of aggregates ([#341](https://github.com/sourcenetwork/defradb/issues/341)) +* Add ability to check DefraDB CLI version.
([#339](https://github.com/sourcenetwork/defradb/issues/339)) + +### Fixes + +* Add a check to ensure limit is not 0 when evaluating query limit and offset ([#706](https://github.com/sourcenetwork/defradb/issues/706)) +* Support multiple `--logger` flags ([#704](https://github.com/sourcenetwork/defradb/issues/704)) +* Return without an error if relation is finalized ([#698](https://github.com/sourcenetwork/defradb/issues/698)) +* Logger not correctly applying named config ([#696](https://github.com/sourcenetwork/defradb/issues/696)) +* Add content-type media type parsing ([#678](https://github.com/sourcenetwork/defradb/issues/678)) +* Remove portSyncLock deadlock condition ([#671](https://github.com/sourcenetwork/defradb/issues/671)) +* Silence cobra default errors and usage printing ([#668](https://github.com/sourcenetwork/defradb/issues/668)) +* Add stdout validation when setting logging output path ([#666](https://github.com/sourcenetwork/defradb/issues/666)) +* Consider `--logoutput` CLI flag properly ([#645](https://github.com/sourcenetwork/defradb/issues/645)) +* Handle errors and responses in CLI `client` commands ([#579](https://github.com/sourcenetwork/defradb/issues/579)) +* Rename aggregate gql types ([#638](https://github.com/sourcenetwork/defradb/issues/638)) +* Error when attempting to insert value into relationship field ([#632](https://github.com/sourcenetwork/defradb/issues/632)) +* Allow adding of new schema to database ([#635](https://github.com/sourcenetwork/defradb/issues/635)) +* Correctly parse dockey in broadcast log event. ([#631](https://github.com/sourcenetwork/defradb/issues/631)) +* Increase system's open files limit in integration tests ([#627](https://github.com/sourcenetwork/defradb/issues/627)) +* Avoid populating `order.ordering` with empties. 
([#618](https://github.com/sourcenetwork/defradb/issues/618)) +* Change to supporting of non-null inline arrays ([#609](https://github.com/sourcenetwork/defradb/issues/609)) +* Assert fields exist in collection before saving to them ([#604](https://github.com/sourcenetwork/defradb/issues/604)) +* CLI `init` command to reinitialize only config file ([#603](https://github.com/sourcenetwork/defradb/issues/603)) +* Add config and registry clearing to TestLogWritesMessagesToFeedbackLog ([#596](https://github.com/sourcenetwork/defradb/issues/596)) +* Change `$eq` to `_eq` in the failing test. ([#576](https://github.com/sourcenetwork/defradb/issues/576)) +* Resolve failing HTTP API tests via cleanup ([#557](https://github.com/sourcenetwork/defradb/issues/557)) +* Ensure Makefile compatibility with macOS ([#527](https://github.com/sourcenetwork/defradb/issues/527)) +* Separate out iotas in their own blocks. ([#464](https://github.com/sourcenetwork/defradb/issues/464)) +* Use x/cases for titling instead of strings to handle deprecation ([#457](https://github.com/sourcenetwork/defradb/issues/457)) +* Handle limit and offset in sub groups ([#440](https://github.com/sourcenetwork/defradb/issues/440)) +* Issue preventing DB from restarting with no records ([#437](https://github.com/sourcenetwork/defradb/issues/437)) +* log serving HTTP API before goroutine blocks ([#358](https://github.com/sourcenetwork/defradb/issues/358)) + +### Testing + +* Add integration testing for P2P. ([#655](https://github.com/sourcenetwork/defradb/issues/655)) +* Fix formatting of tests with no extra brackets ([#643](https://github.com/sourcenetwork/defradb/issues/643)) +* Add tests for `averageNode` explain. 
([#639](https://github.com/sourcenetwork/defradb/issues/639)) +* Add schema integration tests ([#628](https://github.com/sourcenetwork/defradb/issues/628)) +* Add tests for default properties ([#611](https://github.com/sourcenetwork/defradb/issues/611)) +* Specify which collection to update in test framework ([#601](https://github.com/sourcenetwork/defradb/issues/601)) +* Add tests for grouping by undefined value ([#543](https://github.com/sourcenetwork/defradb/issues/543)) +* Add test for querying undefined field ([#544](https://github.com/sourcenetwork/defradb/issues/544)) +* Expand commit query tests ([#541](https://github.com/sourcenetwork/defradb/issues/541)) +* Add cid (time-travel) query tests ([#539](https://github.com/sourcenetwork/defradb/issues/539)) +* Restructure and expand filter tests ([#512](https://github.com/sourcenetwork/defradb/issues/512)) +* Basic unit testing of `node` package ([#503](https://github.com/sourcenetwork/defradb/issues/503)) +* Test filter in filter tests ([#473](https://github.com/sourcenetwork/defradb/issues/473)) +* Add test for deletion of records in a relationship ([#329](https://github.com/sourcenetwork/defradb/issues/329)) +* Benchmark transaction iteration ([#289](https://github.com/sourcenetwork/defradb/issues/289)) + +### Refactoring + +* Improve CLI error handling and fix small issues ([#649](https://github.com/sourcenetwork/defradb/issues/649)) +* Add top-level `version` package ([#583](https://github.com/sourcenetwork/defradb/issues/583)) +* Remove extra log levels ([#634](https://github.com/sourcenetwork/defradb/issues/634)) +* Change `sortNode` to `orderNode`. 
([#591](https://github.com/sourcenetwork/defradb/issues/591)) +* Rework update and delete node to remove secondary planner ([#571](https://github.com/sourcenetwork/defradb/issues/571)) +* Trim imported connor package ([#530](https://github.com/sourcenetwork/defradb/issues/530)) +* Internal doc restructure ([#471](https://github.com/sourcenetwork/defradb/issues/471)) +* Copy-paste connor fork into repo ([#567](https://github.com/sourcenetwork/defradb/issues/567)) +* Add safety to the tests, add ability to catch stderr logs and add output path validation ([#552](https://github.com/sourcenetwork/defradb/issues/552)) +* Change handler functions implementation and response formatting ([#498](https://github.com/sourcenetwork/defradb/issues/498)) +* Improve the HTTP API implementation ([#382](https://github.com/sourcenetwork/defradb/issues/382)) +* Use new logger in net/api ([#420](https://github.com/sourcenetwork/defradb/issues/420)) +* Rename NewCidV1_SHA2_256 to mixedCaps ([#415](https://github.com/sourcenetwork/defradb/issues/415)) +* Remove utils package ([#397](https://github.com/sourcenetwork/defradb/issues/397)) +* Rework planNode Next and Value(s) function ([#374](https://github.com/sourcenetwork/defradb/issues/374)) +* Restructure aggregate query syntax ([#373](https://github.com/sourcenetwork/defradb/issues/373)) +* Remove dead code from client package and document remaining ([#356](https://github.com/sourcenetwork/defradb/issues/356)) +* Restructure datastore keys ([#316](https://github.com/sourcenetwork/defradb/issues/316)) +* Add commits lost during github outage ([#303](https://github.com/sourcenetwork/defradb/issues/303)) +* Move public members out of core and base packages ([#295](https://github.com/sourcenetwork/defradb/issues/295)) +* Make db stuff internal/private ([#291](https://github.com/sourcenetwork/defradb/issues/291)) +* Rework client.DB to ensure interface contains only public types ([#277](https://github.com/sourcenetwork/defradb/issues/277)) 
+* Remove GetPrimaryIndexDocKey from collection interface ([#279](https://github.com/sourcenetwork/defradb/issues/279)) +* Remove DataStoreKey from (public) dockey struct ([#278](https://github.com/sourcenetwork/defradb/issues/278)) +* Renormalize to ensure consistent file line termination. ([#226](https://github.com/sourcenetwork/defradb/issues/226)) +* Strongly typed key refactor ([#17](https://github.com/sourcenetwork/defradb/issues/17)) + +### Documentation + +* Use permanent link to BSL license document ([#692](https://github.com/sourcenetwork/defradb/issues/692)) +* README update v0.3.0 ([#646](https://github.com/sourcenetwork/defradb/issues/646)) +* Improve code documentation ([#533](https://github.com/sourcenetwork/defradb/issues/533)) +* Add CONTRIBUTING.md ([#531](https://github.com/sourcenetwork/defradb/issues/531)) +* Add package level docs for logging lib ([#338](https://github.com/sourcenetwork/defradb/issues/338)) + +### Tooling + +* Include all touched packages in code coverage ([#673](https://github.com/sourcenetwork/defradb/issues/673)) +* Use `gotestsum` over `go test` ([#619](https://github.com/sourcenetwork/defradb/issues/619)) +* Update Github pull request template ([#524](https://github.com/sourcenetwork/defradb/issues/524)) +* Fix the cross-build script ([#460](https://github.com/sourcenetwork/defradb/issues/460)) +* Add test coverage html output ([#466](https://github.com/sourcenetwork/defradb/issues/466)) +* Add linter rule for `goconst`. ([#398](https://github.com/sourcenetwork/defradb/issues/398)) +* Add github PR template. ([#394](https://github.com/sourcenetwork/defradb/issues/394)) +* Disable auto-fixing linter issues by default ([#429](https://github.com/sourcenetwork/defradb/issues/429)) +* Fix linting of empty `else` code blocks ([#402](https://github.com/sourcenetwork/defradb/issues/402)) +* Add the `gofmt` linter rule. 
([#405](https://github.com/sourcenetwork/defradb/issues/405)) +* Cleanup linter config file ([#400](https://github.com/sourcenetwork/defradb/issues/400)) +* Add linter rule for copyright headers ([#360](https://github.com/sourcenetwork/defradb/issues/360)) +* Organize our config files and tooling. ([#336](https://github.com/sourcenetwork/defradb/issues/336)) +* Limit line length to 100 characters (linter check) ([#224](https://github.com/sourcenetwork/defradb/issues/224)) +* Ignore db/tests folder and the benchmarks. ([#280](https://github.com/sourcenetwork/defradb/issues/280)) + +### Continuous Integration + +* Fix circleci cache permission errors. ([#371](https://github.com/sourcenetwork/defradb/issues/371)) +* Ban extra elses ([#366](https://github.com/sourcenetwork/defradb/issues/366)) +* Fix change-detection to not fail when new tests are added. ([#333](https://github.com/sourcenetwork/defradb/issues/333)) +* Update golang-ci linter and explicit go-setup to use v1.17 ([#331](https://github.com/sourcenetwork/defradb/issues/331)) +* Comment the benchmarking result comparison to the PR ([#305](https://github.com/sourcenetwork/defradb/issues/305)) +* Add benchmark performance comparisons ([#232](https://github.com/sourcenetwork/defradb/issues/232)) +* Add caching / storing of bench report on default branch ([#290](https://github.com/sourcenetwork/defradb/issues/290)) +* Ensure full-benchmarks are run on a PR-merge. ([#282](https://github.com/sourcenetwork/defradb/issues/282)) +* Add ability to control benchmarks by PR labels.
([#267](https://github.com/sourcenetwork/defradb/issues/267)) + +### Chore + +* Update APL to refer to D2 Foundation ([#711](https://github.com/sourcenetwork/defradb/issues/711)) +* Update gitignore to include `cmd` folders ([#617](https://github.com/sourcenetwork/defradb/issues/617)) +* Enable random execution order of tests ([#554](https://github.com/sourcenetwork/defradb/issues/554)) +* Enable linters exportloopref, nolintlint, whitespace ([#535](https://github.com/sourcenetwork/defradb/issues/535)) +* Add utility for generation of man pages ([#493](https://github.com/sourcenetwork/defradb/issues/493)) +* Add Dockerfile ([#517](https://github.com/sourcenetwork/defradb/issues/517)) +* Enable errorlint linter ([#520](https://github.com/sourcenetwork/defradb/issues/520)) +* Binaries in `cmd` folder, examples in `examples` folder ([#501](https://github.com/sourcenetwork/defradb/issues/501)) +* Improve log outputs ([#506](https://github.com/sourcenetwork/defradb/issues/506)) +* Move testing to top-level `tests` folder ([#446](https://github.com/sourcenetwork/defradb/issues/446)) +* Update dependencies ([#450](https://github.com/sourcenetwork/defradb/issues/450)) +* Update go-ipfs-blockstore and ipfs-lite ([#436](https://github.com/sourcenetwork/defradb/issues/436)) +* Update libp2p dependency to v0.19 ([#424](https://github.com/sourcenetwork/defradb/issues/424)) +* Update ioutil package to io / os packages.
([#376](https://github.com/sourcenetwork/defradb/issues/376)) +* git ignore vscode ([#343](https://github.com/sourcenetwork/defradb/issues/343)) +* Updated README.md contributors section ([#292](https://github.com/sourcenetwork/defradb/issues/292)) +* Update changelog v0.2.1 ([#252](https://github.com/sourcenetwork/defradb/issues/252)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.3.1.md b/docs/defradb/release notes/v0.3.1.md new file mode 100644 index 0000000..1d54ada --- /dev/null +++ b/docs/defradb/release notes/v0.3.1.md @@ -0,0 +1,94 @@ +--- +sidebar_position: 31 +--- +# v0.3.1 + +> 2022-09-23 + +DefraDB v0.3.1 is a minor release, primarily focusing on additional/extended features and fixes of items added in the `v0.3.0` release. + +### Features + +* Add cid support for allCommits ([#857](https://github.com/sourcenetwork/defradb/issues/857)) +* Add offset support to allCommits ([#859](https://github.com/sourcenetwork/defradb/issues/859)) +* Add limit support to allCommits query ([#856](https://github.com/sourcenetwork/defradb/issues/856)) +* Add order support to allCommits ([#845](https://github.com/sourcenetwork/defradb/issues/845)) +* Display CLI usage on user error ([#819](https://github.com/sourcenetwork/defradb/issues/819)) +* Add support for dockey filters in child joins ([#806](https://github.com/sourcenetwork/defradb/issues/806)) +* Add sort support for numeric aggregates ([#786](https://github.com/sourcenetwork/defradb/issues/786)) +* Allow filtering by nil ([#789](https://github.com/sourcenetwork/defradb/issues/789)) +* Add aggregate offset support ([#778](https://github.com/sourcenetwork/defradb/issues/778)) +* Remove filter depth limit ([#777](https://github.com/sourcenetwork/defradb/issues/777)) +* Add support for and-or inline array aggregate filters ([#779](https://github.com/sourcenetwork/defradb/issues/779)) +* Add limit support for aggregates ([#771](https://github.com/sourcenetwork/defradb/issues/771)) +* Add support 
for inline arrays of nillable types ([#759](https://github.com/sourcenetwork/defradb/issues/759)) +* Create errors package ([#548](https://github.com/sourcenetwork/defradb/issues/548)) +* Add ability to display peer id ([#719](https://github.com/sourcenetwork/defradb/issues/719)) +* Add a config option to set the vlog max file size ([#743](https://github.com/sourcenetwork/defradb/issues/743)) +* Explain `topLevelNode` like a `MultiNode` plan ([#749](https://github.com/sourcenetwork/defradb/issues/749)) +* Make `topLevelNode` explainable ([#737](https://github.com/sourcenetwork/defradb/issues/737)) + +### Fixes + +* Order subtype without selecting the join child ([#810](https://github.com/sourcenetwork/defradb/issues/810)) +* Correctly handles nil one-one joins ([#837](https://github.com/sourcenetwork/defradb/issues/837)) +* Reset scan node for each join ([#828](https://github.com/sourcenetwork/defradb/issues/828)) +* Handle filter input field argument being nil ([#787](https://github.com/sourcenetwork/defradb/issues/787)) +* Ensure CLI outputs JSON to stdout when directed to pipe ([#804](https://github.com/sourcenetwork/defradb/issues/804)) +* Error if given the wrong side of a one-one relationship ([#795](https://github.com/sourcenetwork/defradb/issues/795)) +* Add object marker to enable return of empty docs ([#800](https://github.com/sourcenetwork/defradb/issues/800)) +* Resolve the extra `typeIndexJoin`s for `_avg` aggregate ([#774](https://github.com/sourcenetwork/defradb/issues/774)) +* Remove _like filter operator ([#797](https://github.com/sourcenetwork/defradb/issues/797)) +* Remove having gql types ([#785](https://github.com/sourcenetwork/defradb/issues/785)) +* Error if child _group selected without parent groupBy ([#781](https://github.com/sourcenetwork/defradb/issues/781)) +* Error nicely on missing field specifier ([#782](https://github.com/sourcenetwork/defradb/issues/782)) +* Handle order input field argument being nil 
([#701](https://github.com/sourcenetwork/defradb/issues/701)) +* Change output to outputpath in config file template for logger ([#716](https://github.com/sourcenetwork/defradb/issues/716)) +* Delete mutations not correctly persisting all keys ([#731](https://github.com/sourcenetwork/defradb/issues/731)) + +### Tooling + +* Ban the usage of `ioutil` package ([#747](https://github.com/sourcenetwork/defradb/issues/747)) +* Migrate from CircleCi to GitHub Actions ([#679](https://github.com/sourcenetwork/defradb/issues/679)) + +### Documentation + +* Clarify meaning of url param, update in-repo CLI docs ([#814](https://github.com/sourcenetwork/defradb/issues/814)) +* Add disclaimer that data is exposed to the network and not encrypted ([#793](https://github.com/sourcenetwork/defradb/issues/793)) +* Update logo to respect theme ([#728](https://github.com/sourcenetwork/defradb/issues/728)) + +### Refactoring + +* Replace all `interface{}` with `any` alias ([#805](https://github.com/sourcenetwork/defradb/issues/805)) +* Use fastjson to parse mutation data string ([#772](https://github.com/sourcenetwork/defradb/issues/772)) +* Rework limit node flow ([#767](https://github.com/sourcenetwork/defradb/issues/767)) +* Make Option immutable ([#769](https://github.com/sourcenetwork/defradb/issues/769)) +* Rework sum and count nodes to make use of generics ([#757](https://github.com/sourcenetwork/defradb/issues/757)) +* Remove some possible panics from codebase ([#732](https://github.com/sourcenetwork/defradb/issues/732)) +* Change logging calls to use feedback in CLI package ([#714](https://github.com/sourcenetwork/defradb/issues/714)) + +### Testing + +* Add tests for aggs with nil filters ([#813](https://github.com/sourcenetwork/defradb/issues/813)) +* Add not equals filter tests ([#798](https://github.com/sourcenetwork/defradb/issues/798)) +* Fix `cli/peerid_test` to not clash addresses ([#766](https://github.com/sourcenetwork/defradb/issues/766)) +* Add change detector summary to test readme
([#754](https://github.com/sourcenetwork/defradb/issues/754)) +* Add tests for inline array grouping ([#752](https://github.com/sourcenetwork/defradb/issues/752)) + +### Continuous integration + +* Reduce test resource usage and test with file db ([#791](https://github.com/sourcenetwork/defradb/issues/791)) +* Add makefile target to verify the local module cache ([#775](https://github.com/sourcenetwork/defradb/issues/775)) +* Allow PR titles to end with a number ([#745](https://github.com/sourcenetwork/defradb/issues/745)) +* Add a workflow to validate pull request titles ([#734](https://github.com/sourcenetwork/defradb/issues/734)) +* Fix the linter version to `v1.47` ([#726](https://github.com/sourcenetwork/defradb/issues/726)) + +### Chore + +* Remove file system paths from resulting executable ([#831](https://github.com/sourcenetwork/defradb/issues/831)) +* Add goimports linter for consistent imports ordering ([#816](https://github.com/sourcenetwork/defradb/issues/816)) +* Improve UX by providing more information ([#802](https://github.com/sourcenetwork/defradb/issues/802)) +* Change to defra errors and handle errors stacktrace ([#794](https://github.com/sourcenetwork/defradb/issues/794)) +* Clean up `go.mod` with pruned module graphs ([#756](https://github.com/sourcenetwork/defradb/issues/756)) +* Update to v0.20.3 of libp2p ([#740](https://github.com/sourcenetwork/defradb/issues/740)) +* Bump to GoLang `v1.18` ([#721](https://github.com/sourcenetwork/defradb/issues/721)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.4.0.md b/docs/defradb/release notes/v0.4.0.md new file mode 100644 index 0000000..24ec532 --- /dev/null +++ b/docs/defradb/release notes/v0.4.0.md @@ -0,0 +1,80 @@ +--- +sidebar_position: 40 +--- + +# v0.4.0 + +> 2022-12-23 + +DefraDB v0.4 is a major pre-production release.
Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +There are various new features in this release - some of which are breaking - and we invite you to review the official changelog below. Some highlights are persistence of replicators, DateTime scalars, TLS support, and GQL subscriptions. + +This release does include a Breaking Change to existing v0.3.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. + +### Features + +* Add basic metric functionality ([#971](https://github.com/sourcenetwork/defradb/issues/971)) +* Add thread safe transactional in-memory datastore ([#947](https://github.com/sourcenetwork/defradb/issues/947)) +* Persist p2p replicators ([#960](https://github.com/sourcenetwork/defradb/issues/960)) +* Add DateTime custom scalars ([#931](https://github.com/sourcenetwork/defradb/issues/931)) +* Add GraphQL subscriptions ([#934](https://github.com/sourcenetwork/defradb/issues/934)) +* Add support for tls ([#885](https://github.com/sourcenetwork/defradb/issues/885)) +* Add group by support for commits ([#887](https://github.com/sourcenetwork/defradb/issues/887)) +* Add depth support for commits ([#889](https://github.com/sourcenetwork/defradb/issues/889)) +* Make dockey optional for allCommits queries ([#847](https://github.com/sourcenetwork/defradb/issues/847)) +* Add WithStack to the errors package ([#870](https://github.com/sourcenetwork/defradb/issues/870)) +* Add event system ([#834](https://github.com/sourcenetwork/defradb/issues/834)) + +### Fixes + +* Correct errors.WithStack behaviour ([#984](https://github.com/sourcenetwork/defradb/issues/984)) +* Correctly handle nested one to one joins ([#964](https://github.com/sourcenetwork/defradb/issues/964)) +* Do not assume parent 
record exists when joining ([#963](https://github.com/sourcenetwork/defradb/issues/963)) +* Change time format for HTTP API log ([#910](https://github.com/sourcenetwork/defradb/issues/910)) +* Error if group select contains non-group-by fields ([#898](https://github.com/sourcenetwork/defradb/issues/898)) +* Add inspection of values for ENV flags ([#900](https://github.com/sourcenetwork/defradb/issues/900)) +* Remove panics from document ([#881](https://github.com/sourcenetwork/defradb/issues/881)) +* Add __typename support ([#871](https://github.com/sourcenetwork/defradb/issues/871)) +* Handle subscriber close ([#877](https://github.com/sourcenetwork/defradb/issues/877)) +* Publish update events post commit ([#866](https://github.com/sourcenetwork/defradb/issues/866)) + +### Refactoring + +* Make rootstore require Batching and TxnDatastore ([#940](https://github.com/sourcenetwork/defradb/issues/940)) +* Conceptually clarify schema vs query-language ([#924](https://github.com/sourcenetwork/defradb/issues/924)) +* Decouple db.db from gql ([#912](https://github.com/sourcenetwork/defradb/issues/912)) +* Merkle clock heads cleanup ([#918](https://github.com/sourcenetwork/defradb/issues/918)) +* Simplify dag fetcher ([#913](https://github.com/sourcenetwork/defradb/issues/913)) +* Cleanup parsing logic ([#909](https://github.com/sourcenetwork/defradb/issues/909)) +* Move planner outside the gql directory ([#907](https://github.com/sourcenetwork/defradb/issues/907)) +* Refactor commit nodes ([#892](https://github.com/sourcenetwork/defradb/issues/892)) +* Make latest commits syntax sugar ([#890](https://github.com/sourcenetwork/defradb/issues/890)) +* Remove commit query ([#841](https://github.com/sourcenetwork/defradb/issues/841)) + +### Testing + +* Add event tests ([#965](https://github.com/sourcenetwork/defradb/issues/965)) +* Add new setup for testing explain functionality ([#949](https://github.com/sourcenetwork/defradb/issues/949)) +* Add txn relation-type delete and 
create tests ([#875](https://github.com/sourcenetwork/defradb/issues/875)) +* Skip change detection for tests that assert panic ([#883](https://github.com/sourcenetwork/defradb/issues/883)) + +### Continuous integration + +* Bump all gh-action versions to support node16 ([#990](https://github.com/sourcenetwork/defradb/issues/990)) +* Bump ssh-agent action to v0.7.0 ([#978](https://github.com/sourcenetwork/defradb/issues/978)) +* Add error message format check ([#901](https://github.com/sourcenetwork/defradb/issues/901)) + +### Chore + +* Extract (events, merkle) errors to errors.go ([#973](https://github.com/sourcenetwork/defradb/issues/973)) +* Extract (datastore, db) errors to errors.go ([#969](https://github.com/sourcenetwork/defradb/issues/969)) +* Extract (connor, crdt, core) errors to errors.go ([#968](https://github.com/sourcenetwork/defradb/issues/968)) +* Extract inline (http and client) errors to errors.go ([#967](https://github.com/sourcenetwork/defradb/issues/967)) +* Update badger version ([#966](https://github.com/sourcenetwork/defradb/issues/966)) +* Move Option and Enumerable to immutables ([#939](https://github.com/sourcenetwork/defradb/issues/939)) +* Add configuration of external loggers ([#942](https://github.com/sourcenetwork/defradb/issues/942)) +* Strip DSKey prefixes and simplify NewDataStoreKey ([#944](https://github.com/sourcenetwork/defradb/issues/944)) +* Include version metadata in cross-building ([#930](https://github.com/sourcenetwork/defradb/issues/930)) +* Update to v0.23.2 the libP2P package ([#908](https://github.com/sourcenetwork/defradb/issues/908)) +* Remove `ipfslite` dependency ([#739](https://github.com/sourcenetwork/defradb/issues/739)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.5.0.md b/docs/defradb/release notes/v0.5.0.md new file mode 100644 index 0000000..96e8499 --- /dev/null +++ b/docs/defradb/release notes/v0.5.0.md @@ -0,0 +1,144 @@ +--- +sidebar_position: 50 +--- + +# v0.5.0 + +> 
2023-04-12 + +DefraDB v0.5 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +There are many new features in this release, but most importantly, this is the first open source release for DefraDB. As such, this release focused on various quality of life changes and refactors, bug fixes, and overall cleanliness of the repo so it can effectively be used and tested in the public domain. + +To get a full outline of the changes, we invite you to review the official changelog below. Some highlights are the first iteration of our schema update system, allowing developers to add new fields to schemas using our JSON Patch based DDL, a new DAG based delete system which will persist "soft-delete" ops into the CRDT Merkle DAG, and an early prototype for our collection level peer-to-peer synchronization. + +This release does include a Breaking Change to existing v0.4.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. + +### Features + +* Add document delete mechanics ([#1263](https://github.com/sourcenetwork/defradb/issues/1263)) +* Ability to explain an executed request ([#1188](https://github.com/sourcenetwork/defradb/issues/1188)) +* Add SchemaPatch CLI command ([#1250](https://github.com/sourcenetwork/defradb/issues/1250)) +* Add support for one-one mutation from sec.
side ([#1247](https://github.com/sourcenetwork/defradb/issues/1247)) +* Store only key in DAG instead of dockey path ([#1245](https://github.com/sourcenetwork/defradb/issues/1245)) +* Add collectionId field to commit field ([#1235](https://github.com/sourcenetwork/defradb/issues/1235)) +* Add field kind substitution for PatchSchema ([#1223](https://github.com/sourcenetwork/defradb/issues/1223)) +* Add dockey field for commit field ([#1216](https://github.com/sourcenetwork/defradb/issues/1216)) +* Allow new fields to be added locally to schema ([#1139](https://github.com/sourcenetwork/defradb/issues/1139)) +* Add `like` sub-string filter ([#1091](https://github.com/sourcenetwork/defradb/issues/1091)) +* Add ability for P2P to wait for pushlog by peer ([#1098](https://github.com/sourcenetwork/defradb/issues/1098)) +* Add P2P collection topic subscription ([#1086](https://github.com/sourcenetwork/defradb/issues/1086)) +* Add support for schema version id in queries ([#1067](https://github.com/sourcenetwork/defradb/issues/1067)) +* Add schema version id to commit queries ([#1061](https://github.com/sourcenetwork/defradb/issues/1061)) +* Persist schema version at time of commit ([#1055](https://github.com/sourcenetwork/defradb/issues/1055)) +* Add ability to input simple explain type arg ([#1039](https://github.com/sourcenetwork/defradb/issues/1039)) + +### Fixes + +* API address parameter validation ([#1311](https://github.com/sourcenetwork/defradb/issues/1311)) +* Improve error message for NonNull GQL types ([#1333](https://github.com/sourcenetwork/defradb/issues/1333)) +* Handle panics in the rpc server ([#1330](https://github.com/sourcenetwork/defradb/issues/1330)) +* Handle returned error in select.go ([#1329](https://github.com/sourcenetwork/defradb/issues/1329)) +* Resolve handful of CLI issues ([#1318](https://github.com/sourcenetwork/defradb/issues/1318)) +* Only check for events queue on subscription request 
([#1326](https://github.com/sourcenetwork/defradb/issues/1326)) +* Remove client Create/UpdateCollection ([#1309](https://github.com/sourcenetwork/defradb/issues/1309)) +* CLI to display specific command usage help ([#1314](https://github.com/sourcenetwork/defradb/issues/1314)) +* Fix P2P collection CLI commands ([#1295](https://github.com/sourcenetwork/defradb/issues/1295)) +* Dont double up badger file path ([#1299](https://github.com/sourcenetwork/defradb/issues/1299)) +* Update immutable package ([#1290](https://github.com/sourcenetwork/defradb/issues/1290)) +* Fix panic on success of Add/RemoveP2PCollections ([#1297](https://github.com/sourcenetwork/defradb/issues/1297)) +* Fix deadlock on memory-datastore Close ([#1273](https://github.com/sourcenetwork/defradb/issues/1273)) +* Determine if query is introspection query ([#1255](https://github.com/sourcenetwork/defradb/issues/1255)) +* Allow newly added fields to sync via p2p ([#1226](https://github.com/sourcenetwork/defradb/issues/1226)) +* Expose `ExplainEnum` in the GQL schema ([#1204](https://github.com/sourcenetwork/defradb/issues/1204)) +* Resolve aggregates' mapping with deep nested subtypes ([#1175](https://github.com/sourcenetwork/defradb/issues/1175)) +* Make sort stable and handle nil comparison ([#1094](https://github.com/sourcenetwork/defradb/issues/1094)) +* Change successful schema add status to 200 ([#1106](https://github.com/sourcenetwork/defradb/issues/1106)) +* Add delay in P2P test util execution ([#1093](https://github.com/sourcenetwork/defradb/issues/1093)) +* Ensure errors test don't hard expect folder name ([#1072](https://github.com/sourcenetwork/defradb/issues/1072)) +* Remove potential P2P deadlock ([#1056](https://github.com/sourcenetwork/defradb/issues/1056)) +* Rework the P2P integration tests ([#989](https://github.com/sourcenetwork/defradb/issues/989)) +* Improve DAG sync with highly concurrent updates ([#1031](https://github.com/sourcenetwork/defradb/issues/1031)) + +### 
Documentation + +* Update docs for the v0.5 release ([#1320](https://github.com/sourcenetwork/defradb/issues/1320)) +* Document client interfaces in client/db.go ([#1305](https://github.com/sourcenetwork/defradb/issues/1305)) +* Document client Description types ([#1307](https://github.com/sourcenetwork/defradb/issues/1307)) +* Improve security policy ([#1240](https://github.com/sourcenetwork/defradb/issues/1240)) +* Add security disclosure policy ([#1194](https://github.com/sourcenetwork/defradb/issues/1194)) +* Correct commits query example in readme ([#1172](https://github.com/sourcenetwork/defradb/issues/1172)) + +### Refactoring + +* Improve p2p collection operations on peer ([#1286](https://github.com/sourcenetwork/defradb/issues/1286)) +* Migrate gql introspection tests to new framework ([#1211](https://github.com/sourcenetwork/defradb/issues/1211)) +* Reorganise client transaction related interfaces ([#1180](https://github.com/sourcenetwork/defradb/issues/1180)) +* Config-local viper, rootdir, and logger parsing ([#1132](https://github.com/sourcenetwork/defradb/issues/1132)) +* Migrate mutation-relation tests to new framework ([#1109](https://github.com/sourcenetwork/defradb/issues/1109)) +* Rework integration test framework ([#1089](https://github.com/sourcenetwork/defradb/issues/1089)) +* Generate gql types using col. 
desc ([#1080](https://github.com/sourcenetwork/defradb/issues/1080)) +* Extract config errors to dedicated file ([#1107](https://github.com/sourcenetwork/defradb/issues/1107)) +* Change terminology from query to request ([#1054](https://github.com/sourcenetwork/defradb/issues/1054)) +* Allow db keys to handle multiple schema versions ([#1026](https://github.com/sourcenetwork/defradb/issues/1026)) +* Extract query schema errors to dedicated file ([#1037](https://github.com/sourcenetwork/defradb/issues/1037)) +* Extract planner errors to dedicated file ([#1034](https://github.com/sourcenetwork/defradb/issues/1034)) +* Extract query parser errors to dedicated file ([#1035](https://github.com/sourcenetwork/defradb/issues/1035)) + +### Testing + +* Remove test reference to DEFRA_ROOTDIR env var ([#1328](https://github.com/sourcenetwork/defradb/issues/1328)) +* Expand tests for Peer subscribe actions ([#1287](https://github.com/sourcenetwork/defradb/issues/1287)) +* Fix flaky TestCloseThroughContext test ([#1265](https://github.com/sourcenetwork/defradb/issues/1265)) +* Add gql introspection tests for patch schema ([#1219](https://github.com/sourcenetwork/defradb/issues/1219)) +* Explicitly state change detector split for test ([#1228](https://github.com/sourcenetwork/defradb/issues/1228)) +* Add test for successful one-one create mutation ([#1215](https://github.com/sourcenetwork/defradb/issues/1215)) +* Ensure that all databases are always closed on exit ([#1187](https://github.com/sourcenetwork/defradb/issues/1187)) +* Add P2P tests for Schema Update adding field ([#1182](https://github.com/sourcenetwork/defradb/issues/1182)) +* Migrate P2P/state tests to new framework ([#1160](https://github.com/sourcenetwork/defradb/issues/1160)) +* Remove sleep from subscription tests ([#1156](https://github.com/sourcenetwork/defradb/issues/1156)) +* Fetch documents on test execution start ([#1163](https://github.com/sourcenetwork/defradb/issues/1163)) +* Introduce basic testing 
for the `version` module ([#1111](https://github.com/sourcenetwork/defradb/issues/1111)) +* Boost test coverage for collection_update ([#1050](https://github.com/sourcenetwork/defradb/issues/1050)) +* Wait between P2P update retry attempts ([#1052](https://github.com/sourcenetwork/defradb/issues/1052)) +* Exclude auto-generated protobuf files from codecov ([#1048](https://github.com/sourcenetwork/defradb/issues/1048)) +* Add P2P tests for relational docs ([#1042](https://github.com/sourcenetwork/defradb/issues/1042)) + +### Continuous integration + +* Add workflow that builds DefraDB AMI upon tag push ([#1304](https://github.com/sourcenetwork/defradb/issues/1304)) +* Allow PR title to end with a capital letter ([#1291](https://github.com/sourcenetwork/defradb/issues/1291)) +* Changes for `dependabot` to be well-behaved ([#1165](https://github.com/sourcenetwork/defradb/issues/1165)) +* Skip benchmarks for dependabot ([#1144](https://github.com/sourcenetwork/defradb/issues/1144)) +* Add workflow to ensure deps build properly ([#1078](https://github.com/sourcenetwork/defradb/issues/1078)) +* Runner and Builder Containerfiles ([#951](https://github.com/sourcenetwork/defradb/issues/951)) +* Fix go-header linter rule to be any year ([#1021](https://github.com/sourcenetwork/defradb/issues/1021)) + +### Chore + +* Add Islam as contributor ([#1302](https://github.com/sourcenetwork/defradb/issues/1302)) +* Update go-libp2p to 0.26.4 ([#1257](https://github.com/sourcenetwork/defradb/issues/1257)) +* Improve the test coverage of datastore ([#1203](https://github.com/sourcenetwork/defradb/issues/1203)) +* Add issue and discussion templates ([#1193](https://github.com/sourcenetwork/defradb/issues/1193)) +* Bump libp2p/go-libp2p-kad-dht from 0.21.0 to 0.21.1 ([#1146](https://github.com/sourcenetwork/defradb/issues/1146)) +* Enable dependabot ([#1120](https://github.com/sourcenetwork/defradb/issues/1120)) +* Update `opentelemetry` dependencies 
([#1114](https://github.com/sourcenetwork/defradb/issues/1114)) +* Update dependencies including go-ipfs ([#1112](https://github.com/sourcenetwork/defradb/issues/1112)) +* Bump to GoLang v1.19 ([#818](https://github.com/sourcenetwork/defradb/issues/818)) +* Remove versionedScan node ([#1049](https://github.com/sourcenetwork/defradb/issues/1049)) + +### Bot + +* Bump github.com/multiformats/go-multiaddr from 0.8.0 to 0.9.0 ([#1277](https://github.com/sourcenetwork/defradb/issues/1277)) +* Bump google.golang.org/grpc from 1.53.0 to 1.54.0 ([#1233](https://github.com/sourcenetwork/defradb/issues/1233)) +* Bump github.com/multiformats/go-multibase from 0.1.1 to 0.2.0 ([#1230](https://github.com/sourcenetwork/defradb/issues/1230)) +* Bump github.com/ipfs/go-libipfs from 0.6.2 to 0.7.0 ([#1231](https://github.com/sourcenetwork/defradb/issues/1231)) +* Bump github.com/ipfs/go-cid from 0.3.2 to 0.4.0 ([#1200](https://github.com/sourcenetwork/defradb/issues/1200)) +* Bump github.com/ipfs/go-ipfs-blockstore from 1.2.0 to 1.3.0 ([#1199](https://github.com/sourcenetwork/defradb/issues/1199)) +* Bump github.com/stretchr/testify from 1.8.1 to 1.8.2 ([#1198](https://github.com/sourcenetwork/defradb/issues/1198)) +* Bump github.com/ipfs/go-libipfs from 0.6.1 to 0.6.2 ([#1201](https://github.com/sourcenetwork/defradb/issues/1201)) +* Bump golang.org/x/crypto from 0.6.0 to 0.7.0 ([#1197](https://github.com/sourcenetwork/defradb/issues/1197)) +* Bump libp2p/go-libp2p-gostream from 0.5.0 to 0.6.0 ([#1152](https://github.com/sourcenetwork/defradb/issues/1152)) +* Bump github.com/ipfs/go-libipfs from 0.5.0 to 0.6.1 ([#1166](https://github.com/sourcenetwork/defradb/issues/1166)) +* Bump github.com/ugorji/go/codec from 1.2.9 to 1.2.11 ([#1173](https://github.com/sourcenetwork/defradb/issues/1173)) +* Bump github.com/libp2p/go-libp2p-pubsub from 0.9.0 to 0.9.3 ([#1183](https://github.com/sourcenetwork/defradb/issues/1183)) \ No newline at end of file diff --git a/docs/defradb/release 
notes/v0.5.1.md b/docs/defradb/release notes/v0.5.1.md new file mode 100644 index 0000000..f204c56 --- /dev/null +++ b/docs/defradb/release notes/v0.5.1.md @@ -0,0 +1,91 @@ +--- +sidebar_position: 51 +--- + +# v0.5.1 + +> 2023-05-16 + +### Features + +* Add collection response information on creation ([#1499](https://github.com/sourcenetwork/defradb/issues/1499)) +* CLI client request from file ([#1503](https://github.com/sourcenetwork/defradb/issues/1503)) +* Add commits fieldName and fieldId fields ([#1451](https://github.com/sourcenetwork/defradb/issues/1451)) +* Add allowed origins config ([#1408](https://github.com/sourcenetwork/defradb/issues/1408)) +* Add descriptions to all system defined GQL stuff ([#1387](https://github.com/sourcenetwork/defradb/issues/1387)) +* Strongly type Request.Errors ([#1364](https://github.com/sourcenetwork/defradb/issues/1364)) + +### Fixes + +* Skip new test packages in change detector ([#1495](https://github.com/sourcenetwork/defradb/issues/1495)) +* Make nested joins work correctly from primary direction ([#1491](https://github.com/sourcenetwork/defradb/issues/1491)) +* Add reconnection to known peers ([#1482](https://github.com/sourcenetwork/defradb/issues/1482)) +* Rename commit field input arg to fieldId ([#1460](https://github.com/sourcenetwork/defradb/issues/1460)) +* Reference collectionID in p2p readme ([#1466](https://github.com/sourcenetwork/defradb/issues/1466)) +* Handling SIGTERM in CLI `start` command ([#1459](https://github.com/sourcenetwork/defradb/issues/1459)) +* Update QL documentation link and replicator command ([#1440](https://github.com/sourcenetwork/defradb/issues/1440)) +* Fix typo in readme ([#1419](https://github.com/sourcenetwork/defradb/issues/1419)) +* Limit the size of http request bodies that we handle ([#1405](https://github.com/sourcenetwork/defradb/issues/1405)) +* Improve P2P event handling ([#1388](https://github.com/sourcenetwork/defradb/issues/1388)) +* Serialize DB errors to json in http 
package ([#1401](https://github.com/sourcenetwork/defradb/issues/1401)) +* Do not commit if errors have been returned ([#1390](https://github.com/sourcenetwork/defradb/issues/1390)) +* Unlock replicator lock before returning error ([#1369](https://github.com/sourcenetwork/defradb/issues/1369)) +* Improve NonNull error message ([#1362](https://github.com/sourcenetwork/defradb/issues/1362)) +* Use ring-buffer for WaitForFoo chans ([#1359](https://github.com/sourcenetwork/defradb/issues/1359)) +* Guarantee event processing order ([#1352](https://github.com/sourcenetwork/defradb/issues/1352)) +* Explain of _group with dockeys filter to be []string ([#1348](https://github.com/sourcenetwork/defradb/issues/1348)) + +### Refactoring + +* Use `int32` for proper gql scalar Int parsing ([#1493](https://github.com/sourcenetwork/defradb/issues/1493)) +* Improve rollback on peer P2P collection error ([#1461](https://github.com/sourcenetwork/defradb/issues/1461)) +* Improve CLI with test suite and builder pattern ([#928](https://github.com/sourcenetwork/defradb/issues/928)) + +### Testing + +* Add DB/Node Restart tests ([#1504](https://github.com/sourcenetwork/defradb/issues/1504)) +* Provide tests for client introspection query ([#1492](https://github.com/sourcenetwork/defradb/issues/1492)) +* Convert explain count tests to new explain setup ([#1488](https://github.com/sourcenetwork/defradb/issues/1488)) +* Convert explain sum tests to new explain setup ([#1489](https://github.com/sourcenetwork/defradb/issues/1489)) +* Convert explain average tests to new explain setup ([#1487](https://github.com/sourcenetwork/defradb/issues/1487)) +* Convert explain top-level tests to new explain setup ([#1480](https://github.com/sourcenetwork/defradb/issues/1480)) +* Convert explain order tests to new explain setup ([#1478](https://github.com/sourcenetwork/defradb/issues/1478)) +* Convert explain join tests to new explain setup ([#1476](https://github.com/sourcenetwork/defradb/issues/1476)) +* 
Convert explain dagscan tests to new explain setup ([#1474](https://github.com/sourcenetwork/defradb/issues/1474)) +* Add tests to assert schema id order independence ([#1456](https://github.com/sourcenetwork/defradb/issues/1456)) +* Capitalize all integration schema types ([#1445](https://github.com/sourcenetwork/defradb/issues/1445)) +* Convert explain limit tests to new explain setup ([#1446](https://github.com/sourcenetwork/defradb/issues/1446)) +* Improve change detector performance ([#1433](https://github.com/sourcenetwork/defradb/issues/1433)) +* Convert mutation explain tests to new explain setup ([#1416](https://github.com/sourcenetwork/defradb/issues/1416)) +* Convert filter explain tests to new explain setup ([#1380](https://github.com/sourcenetwork/defradb/issues/1380)) +* Retry test doc mutation on transaction conflict ([#1366](https://github.com/sourcenetwork/defradb/issues/1366)) + +### Continuous integration + +* Remove secret ssh key stuff from change detector wf ([#1438](https://github.com/sourcenetwork/defradb/issues/1438)) +* Fix the SSH security issue from AMI scan report ([#1426](https://github.com/sourcenetwork/defradb/issues/1426)) +* Add a separate workflow to run the linter ([#1434](https://github.com/sourcenetwork/defradb/issues/1434)) +* Allow CI to work from forked repo ([#1392](https://github.com/sourcenetwork/defradb/issues/1392)) +* Bump go version within packer for AWS AMI ([#1344](https://github.com/sourcenetwork/defradb/issues/1344)) + +### Chore + +* Enshrine defra logger names ([#1410](https://github.com/sourcenetwork/defradb/issues/1410)) +* Remove some dead code ([#1470](https://github.com/sourcenetwork/defradb/issues/1470)) +* Update graphql-go ([#1422](https://github.com/sourcenetwork/defradb/issues/1422)) +* Improve logging consistency ([#1424](https://github.com/sourcenetwork/defradb/issues/1424)) +* Makefile tests with shorter timeout and common flags ([#1397](https://github.com/sourcenetwork/defradb/issues/1397)) +* Move 
to gofrs/uuid ([#1396](https://github.com/sourcenetwork/defradb/issues/1396)) +* Move to ipfs boxo ([#1393](https://github.com/sourcenetwork/defradb/issues/1393)) +* Document collection.txn ([#1363](https://github.com/sourcenetwork/defradb/issues/1363)) + +### Bot + +* Bump golang.org/x/crypto from 0.8.0 to 0.9.0 ([#1497](https://github.com/sourcenetwork/defradb/issues/1497)) +* Bump golang.org/x/net from 0.9.0 to 0.10.0 ([#1496](https://github.com/sourcenetwork/defradb/issues/1496)) +* Bump google.golang.org/grpc from 1.54.0 to 1.55.0 ([#1464](https://github.com/sourcenetwork/defradb/issues/1464)) +* Bump github.com/ipfs/boxo from 0.8.0 to 0.8.1 ([#1427](https://github.com/sourcenetwork/defradb/issues/1427)) +* Bump golang.org/x/crypto from 0.7.0 to 0.8.0 ([#1398](https://github.com/sourcenetwork/defradb/issues/1398)) +* Bump github.com/spf13/cobra from 1.6.1 to 1.7.0 ([#1399](https://github.com/sourcenetwork/defradb/issues/1399)) +* Bump github.com/ipfs/go-blockservice from 0.5.0 to 0.5.1 ([#1300](https://github.com/sourcenetwork/defradb/issues/1300)) +* Bump github.com/ipfs/go-cid from 0.4.0 to 0.4.1 ([#1301](https://github.com/sourcenetwork/defradb/issues/1301)) diff --git a/docs/defradb/release notes/v0.6.0.md b/docs/defradb/release notes/v0.6.0.md new file mode 100644 index 0000000..8026fec --- /dev/null +++ b/docs/defradb/release notes/v0.6.0.md @@ -0,0 +1,85 @@ +--- +sidebar_position: 61 +--- + +# v0.6.0 + +> 2023-07-31 + +DefraDB v0.6 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +There are several new and powerful features, important bug fixes, and notable refactors in this release. 
Some highlight features include the initial release of our LensVM based schema migration engine powered by WebAssembly ([#1650](https://github.com/sourcenetwork/defradb/issues/1650)), the newly embedded DefraDB Playground, which includes a bundled GraphQL client and schema manager, and, last but not least, a relation field (type_id) alias to improve the developer experience ([#1609](https://github.com/sourcenetwork/defradb/issues/1609)). + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.5.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.gg/w7jYQVJ/. + +### Features + +* Add `_not` operator ([#1631](https://github.com/sourcenetwork/defradb/issues/1631)) +* Schema list API ([#1625](https://github.com/sourcenetwork/defradb/issues/1625)) +* Add simple data import and export ([#1630](https://github.com/sourcenetwork/defradb/issues/1630)) +* Playground ([#1575](https://github.com/sourcenetwork/defradb/issues/1575)) +* Add schema migration get and set cmds to CLI ([#1650](https://github.com/sourcenetwork/defradb/issues/1650)) +* Allow relation alias on create and update ([#1609](https://github.com/sourcenetwork/defradb/issues/1609)) +* Make fetcher calculate docFetches and fieldFetches ([#1713](https://github.com/sourcenetwork/defradb/issues/1713)) +* Add lens migration engine to defra ([#1564](https://github.com/sourcenetwork/defradb/issues/1564)) +* Add `_keys` attribute to `selectNode` simple explain ([#1546](https://github.com/sourcenetwork/defradb/issues/1546)) +* CLI commands for secondary indexes ([#1595](https://github.com/sourcenetwork/defradb/issues/1595)) +* Add alias to `groupBy` related object ([#1579](https://github.com/sourcenetwork/defradb/issues/1579)) +* Non-unique secondary index (no querying)
([#1450](https://github.com/sourcenetwork/defradb/issues/1450)) +* Add ability to explain-debug all nodes ([#1563](https://github.com/sourcenetwork/defradb/issues/1563)) +* Include dockey in doc exists err ([#1558](https://github.com/sourcenetwork/defradb/issues/1558)) + +### Fixes + +* Better wait in CLI integration test ([#1415](https://github.com/sourcenetwork/defradb/issues/1415)) +* Return error when relation is not defined on both types ([#1647](https://github.com/sourcenetwork/defradb/issues/1647)) +* Change `core.DocumentMapping` to pointer ([#1528](https://github.com/sourcenetwork/defradb/issues/1528)) +* Fix invalid (badger) datastore state ([#1685](https://github.com/sourcenetwork/defradb/issues/1685)) +* Discard index and subscription implicit transactions ([#1715](https://github.com/sourcenetwork/defradb/issues/1715)) +* Remove duplicated `peers` in peerstore prefix ([#1678](https://github.com/sourcenetwork/defradb/issues/1678)) +* Return errors from typeJoinOne ([#1716](https://github.com/sourcenetwork/defradb/issues/1716)) +* Document change detector breaking change ([#1531](https://github.com/sourcenetwork/defradb/issues/1531)) +* Standardise `schema migration` CLI errors ([#1682](https://github.com/sourcenetwork/defradb/issues/1682)) +* Introspection OrderArg returns null inputFields ([#1633](https://github.com/sourcenetwork/defradb/issues/1633)) +* Avoid duplicated requestable fields ([#1621](https://github.com/sourcenetwork/defradb/issues/1621)) +* Normalize int field kind ([#1619](https://github.com/sourcenetwork/defradb/issues/1619)) +* Change the WriteSyncer to use lock when piping ([#1608](https://github.com/sourcenetwork/defradb/issues/1608)) +* Filter splitting and rendering for related types ([#1541](https://github.com/sourcenetwork/defradb/issues/1541)) + +### Documentation + +* Improve CLI command documentation ([#1505](https://github.com/sourcenetwork/defradb/issues/1505)) + +### Refactoring + +* Schema list output to include 
schemaVersionID ([#1706](https://github.com/sourcenetwork/defradb/issues/1706)) +* Reuse lens wasm modules ([#1641](https://github.com/sourcenetwork/defradb/issues/1641)) +* Remove redundant txn param from fetcher start ([#1635](https://github.com/sourcenetwork/defradb/issues/1635)) +* Remove first CRDT byte from field encoded values ([#1622](https://github.com/sourcenetwork/defradb/issues/1622)) +* Merge `node` into `net` and improve coverage ([#1593](https://github.com/sourcenetwork/defradb/issues/1593)) +* Fetcher filter and field optimization ([#1500](https://github.com/sourcenetwork/defradb/issues/1500)) + +### Testing + +* Rework transaction test framework capabilities ([#1603](https://github.com/sourcenetwork/defradb/issues/1603)) +* Expand backup integration tests ([#1699](https://github.com/sourcenetwork/defradb/issues/1699)) +* Disable test ([#1675](https://github.com/sourcenetwork/defradb/issues/1675)) +* Add tests for 1-1 group by id ([#1655](https://github.com/sourcenetwork/defradb/issues/1655)) +* Remove CLI tests from make test ([#1643](https://github.com/sourcenetwork/defradb/issues/1643)) +* Bundle test state into single var ([#1645](https://github.com/sourcenetwork/defradb/issues/1645)) +* Convert explain group tests to new explain setup ([#1537](https://github.com/sourcenetwork/defradb/issues/1537)) +* Add tests for foo_id field name clashes ([#1521](https://github.com/sourcenetwork/defradb/issues/1521)) +* Resume wait correctly following test node restart ([#1515](https://github.com/sourcenetwork/defradb/issues/1515)) +* Require no errors when none expected ([#1509](https://github.com/sourcenetwork/defradb/issues/1509)) + +### Continuous integration + +* Add workflows to push, pull, and validate docker images ([#1676](https://github.com/sourcenetwork/defradb/issues/1676)) +* Build mocks using make ([#1612](https://github.com/sourcenetwork/defradb/issues/1612)) +* Fix terraform plan and merge AMI build + deploy workflow 
([#1514](https://github.com/sourcenetwork/defradb/issues/1514)) +* Reconfigure CodeCov action to ensure stability ([#1414](https://github.com/sourcenetwork/defradb/issues/1414)) + +### Chore + +* Bump to GoLang v1.20 ([#1689](https://github.com/sourcenetwork/defradb/issues/1689)) +* Update to ipfs boxo 0.10.0 ([#1573](https://github.com/sourcenetwork/defradb/issues/1573)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.7.0.md b/docs/defradb/release notes/v0.7.0.md new file mode 100644 index 0000000..00ea8d5 --- /dev/null +++ b/docs/defradb/release notes/v0.7.0.md @@ -0,0 +1,74 @@ +--- +sidebar_position: 70 +--- +# v0.7.0 + +> 2023-09-18 + +DefraDB v0.7 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +This release has focused on robustness, testing, and schema management. Highlights include notable expansions to the expressiveness of schema migrations. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.6.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.gg/w7jYQVJ/.
+ +### Features + +* Allow field indexing by name in PatchSchema ([#1810](https://github.com/sourcenetwork/defradb/issues/1810)) +* Auto-create relation id fields via PatchSchema ([#1807](https://github.com/sourcenetwork/defradb/issues/1807)) +* Support PatchSchema relational field kind substitution ([#1777](https://github.com/sourcenetwork/defradb/issues/1777)) +* Add support for adding of relational fields ([#1766](https://github.com/sourcenetwork/defradb/issues/1766)) +* Enable downgrading of documents via Lens inverses ([#1721](https://github.com/sourcenetwork/defradb/issues/1721)) + +### Fixes + +* Correctly handle serialisation of nil field values ([#1872](https://github.com/sourcenetwork/defradb/issues/1872)) +* Compound filter operators with relations ([#1855](https://github.com/sourcenetwork/defradb/issues/1855)) +* Only update updated fields via update requests ([#1817](https://github.com/sourcenetwork/defradb/issues/1817)) +* Error when saving a deleted document ([#1806](https://github.com/sourcenetwork/defradb/issues/1806)) +* Prevent multiple docs from being linked in one one ([#1790](https://github.com/sourcenetwork/defradb/issues/1790)) +* Handle the querying of secondary relation id fields ([#1768](https://github.com/sourcenetwork/defradb/issues/1768)) +* Improve the way migrations handle transactions ([#1737](https://github.com/sourcenetwork/defradb/issues/1737)) + +### Tooling + +* Add Akash deployment configuration ([#1736](https://github.com/sourcenetwork/defradb/issues/1736)) + +### Refactoring + +* HTTP client interface ([#1776](https://github.com/sourcenetwork/defradb/issues/1776)) +* Simplify fetcher interface ([#1746](https://github.com/sourcenetwork/defradb/issues/1746)) + +### Testing + +* Convert and move out of place explain tests ([#1878](https://github.com/sourcenetwork/defradb/issues/1878)) +* Update mutation tests to make use of mutation system ([#1853](https://github.com/sourcenetwork/defradb/issues/1853)) +* Test top level agg. 
with compound relational filter ([#1870](https://github.com/sourcenetwork/defradb/issues/1870)) +* Skip unsupported mutation types at test level ([#1850](https://github.com/sourcenetwork/defradb/issues/1850)) +* Extend mutation tests with col.Update and Create ([#1838](https://github.com/sourcenetwork/defradb/issues/1838)) +* Add tests for multiple one-one joins ([#1793](https://github.com/sourcenetwork/defradb/issues/1793)) + +### Chore + +* Update Badger version to v4 ([#1740](https://github.com/sourcenetwork/defradb/issues/1740)) +* Update go-libp2p to 0.29.2 ([#1780](https://github.com/sourcenetwork/defradb/issues/1780)) +* Bump golangci-lint to v1.54 ([#1881](https://github.com/sourcenetwork/defradb/issues/1881)) +* Bump go.opentelemetry.io/otel/metric from 1.17.0 to 1.18.0 ([#1890](https://github.com/sourcenetwork/defradb/issues/1890)) +* Bump [@tanstack](https://github.com/tanstack)/react-query from 4.35.0 to 4.35.3 in /playground ([#1876](https://github.com/sourcenetwork/defradb/issues/1876)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.5.0 to 6.7.0 in /playground ([#1874](https://github.com/sourcenetwork/defradb/issues/1874)) +* Bump [@typescript](https://github.com/typescript)-eslint/parser from 6.6.0 to 6.7.0 in /playground ([#1875](https://github.com/sourcenetwork/defradb/issues/1875)) +* Combined PRs 2023-09-14 ([#1873](https://github.com/sourcenetwork/defradb/issues/1873)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.4.0 to 6.5.0 in /playground ([#1827](https://github.com/sourcenetwork/defradb/issues/1827)) +* Bump go.opentelemetry.io/otel/sdk/metric from 0.39.0 to 0.40.0 ([#1829](https://github.com/sourcenetwork/defradb/issues/1829)) +* Bump github.com/ipfs/go-block-format from 0.1.2 to 0.2.0 ([#1819](https://github.com/sourcenetwork/defradb/issues/1819)) +* Combined PRs ([#1826](https://github.com/sourcenetwork/defradb/issues/1826)) +* Bump 
[@typescript](https://github.com/typescript)-eslint/parser from 6.4.0 to 6.4.1 in /playground ([#1804](https://github.com/sourcenetwork/defradb/issues/1804)) +* Combined PRs ([#1803](https://github.com/sourcenetwork/defradb/issues/1803)) +* Combined PRs ([#1791](https://github.com/sourcenetwork/defradb/issues/1791)) +* Combined PRs ([#1778](https://github.com/sourcenetwork/defradb/issues/1778)) +* Bump dependencies ([#1761](https://github.com/sourcenetwork/defradb/issues/1761)) +* Bump vite from 4.3.9 to 4.4.8 in /playground ([#1748](https://github.com/sourcenetwork/defradb/issues/1748)) +* Bump graphiql from 3.0.4 to 3.0.5 in /playground ([#1730](https://github.com/sourcenetwork/defradb/issues/1730)) +* Combined bumps of dependencies under /playground ([#1744](https://github.com/sourcenetwork/defradb/issues/1744)) +* Bump github.com/ipfs/boxo from 0.10.2 to 0.11.0 ([#1726](https://github.com/sourcenetwork/defradb/issues/1726)) +* Bump github.com/libp2p/go-libp2p-kad-dht from 0.24.2 to 0.24.3 ([#1724](https://github.com/sourcenetwork/defradb/issues/1724)) +* Bump google.golang.org/grpc from 1.56.2 to 1.57.0 ([#1725](https://github.com/sourcenetwork/defradb/issues/1725)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.8.0.md b/docs/defradb/release notes/v0.8.0.md new file mode 100644 index 0000000..9ff4e85 --- /dev/null +++ b/docs/defradb/release notes/v0.8.0.md @@ -0,0 +1,75 @@ +--- +sidebar_position: 80 +--- +# v0.8.0 + +> 2023-11-14 + +DefraDB v0.8 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.7.x databases. 
If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.gg/w7jYQVJ/. + +### Features + +* Add means to fetch schema ([#2006](https://github.com/sourcenetwork/defradb/issues/2006)) +* Rename Schema.SchemaID to Schema.Root ([#2005](https://github.com/sourcenetwork/defradb/issues/2005)) +* Enable playground in Docker build ([#1986](https://github.com/sourcenetwork/defradb/issues/1986)) +* Change GetCollectionBySchemaFoo funcs to return many ([#1984](https://github.com/sourcenetwork/defradb/issues/1984)) +* Add Swagger UI to playground ([#1979](https://github.com/sourcenetwork/defradb/issues/1979)) +* Add OpenAPI route ([#1960](https://github.com/sourcenetwork/defradb/issues/1960)) +* Remove CollectionDescription.Schema ([#1965](https://github.com/sourcenetwork/defradb/issues/1965)) +* Remove collection from patch schema ([#1957](https://github.com/sourcenetwork/defradb/issues/1957)) +* Make queries utilise secondary indexes ([#1925](https://github.com/sourcenetwork/defradb/issues/1925)) +* Allow setting of default schema version ([#1888](https://github.com/sourcenetwork/defradb/issues/1888)) +* Add CCIP Support ([#1896](https://github.com/sourcenetwork/defradb/issues/1896)) + +### Fixes + +* Fix test module relying on closed memory leak ([#2037](https://github.com/sourcenetwork/defradb/issues/2037)) +* Make return type for FieldKind_INT an int64 ([#1982](https://github.com/sourcenetwork/defradb/issues/1982)) +* Node private key requires data directory ([#1938](https://github.com/sourcenetwork/defradb/issues/1938)) +* Remove collection name from schema ID generation ([#1920](https://github.com/sourcenetwork/defradb/issues/1920)) +* Infinite loop when updating one-one relation ([#1915](https://github.com/sourcenetwork/defradb/issues/1915)) + +### Refactoring + +* CRDT merge direction ([#2016](https://github.com/sourcenetwork/defradb/issues/2016)) +* Reorganise collection 
description storage ([#1988](https://github.com/sourcenetwork/defradb/issues/1988)) +* Add peerstore to multistore ([#1980](https://github.com/sourcenetwork/defradb/issues/1980)) +* P2P client interface ([#1924](https://github.com/sourcenetwork/defradb/issues/1924)) +* Deprecate CollectionDescription.Schema ([#1939](https://github.com/sourcenetwork/defradb/issues/1939)) +* Remove net GRPC API ([#1927](https://github.com/sourcenetwork/defradb/issues/1927)) +* CLI client interface ([#1839](https://github.com/sourcenetwork/defradb/issues/1839)) + +### Continuous integration + +* Add goreleaser workflow ([#2040](https://github.com/sourcenetwork/defradb/issues/2040)) +* Add mac test runner ([#2035](https://github.com/sourcenetwork/defradb/issues/2035)) +* Parallelize change detector ([#1871](https://github.com/sourcenetwork/defradb/issues/1871)) + +### Chore + +* Update dependencies ([#2044](https://github.com/sourcenetwork/defradb/issues/2044)) + +### Bot + +* Bump [@typescript](https://github.com/typescript)-eslint/parser from 6.10.0 to 6.11.0 in /playground ([#2053](https://github.com/sourcenetwork/defradb/issues/2053)) +* Update dependencies (bulk dependabot PRs) 13-11-2023 ([#2052](https://github.com/sourcenetwork/defradb/issues/2052)) +* Bump axios from 1.5.1 to 1.6.1 in /playground ([#2041](https://github.com/sourcenetwork/defradb/issues/2041)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.9.1 to 6.10.0 in /playground ([#2042](https://github.com/sourcenetwork/defradb/issues/2042)) +* Bump [@vitejs](https://github.com/vitejs)/plugin-react-swc from 3.4.0 to 3.4.1 in /playground ([#2022](https://github.com/sourcenetwork/defradb/issues/2022)) +* Update dependencies (bulk dependabot PRs) 08-11-2023 ([#2038](https://github.com/sourcenetwork/defradb/issues/2038)) +* Update dependencies (bulk dependabot PRs) 30-10-2023 ([#2015](https://github.com/sourcenetwork/defradb/issues/2015)) +* Bump eslint-plugin and parser from 6.8.0 to 6.9.0 in 
/playground ([#2000](https://github.com/sourcenetwork/defradb/issues/2000)) +* Update dependencies (bulk dependabot PRs) 16-10-2023 ([#1998](https://github.com/sourcenetwork/defradb/issues/1998)) +* Update dependencies (bulk dependabot PRs) 16-10-2023 ([#1976](https://github.com/sourcenetwork/defradb/issues/1976)) +* Bump golang.org/x/net from 0.16.0 to 0.17.0 ([#1961](https://github.com/sourcenetwork/defradb/issues/1961)) +* Bump [@types](https://github.com/types)/react-dom from 18.2.11 to 18.2.12 in /playground ([#1952](https://github.com/sourcenetwork/defradb/issues/1952)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.7.4 to 6.7.5 in /playground ([#1953](https://github.com/sourcenetwork/defradb/issues/1953)) +* Bump combined dependencies 09-10-2023 ([#1951](https://github.com/sourcenetwork/defradb/issues/1951)) +* Bump [@types](https://github.com/types)/react from 18.2.24 to 18.2.25 in /playground ([#1932](https://github.com/sourcenetwork/defradb/issues/1932)) +* Bump [@typescript](https://github.com/typescript)-eslint/parser from 6.7.3 to 6.7.4 in /playground ([#1933](https://github.com/sourcenetwork/defradb/issues/1933)) +* Bump [@vitejs](https://github.com/vitejs)/plugin-react-swc from 3.3.2 to 3.4.0 in /playground ([#1904](https://github.com/sourcenetwork/defradb/issues/1904)) +* Bump combined dependencies 19-09-2023 ([#1931](https://github.com/sourcenetwork/defradb/issues/1931)) +* Bump graphql from 16.8.0 to 16.8.1 in /playground ([#1901](https://github.com/sourcenetwork/defradb/issues/1901)) +* Update combined dependabot PRs 19-09-2023 ([#1898](https://github.com/sourcenetwork/defradb/issues/1898)) \ No newline at end of file diff --git a/docs/defradb/release notes/v0.9.0.md b/docs/defradb/release notes/v0.9.0.md new file mode 100644 index 0000000..71a6088 --- /dev/null +++ b/docs/defradb/release notes/v0.9.0.md @@ -0,0 +1,78 @@ +--- +sidebar_position: 90 +--- +# v0.9.0 + +> 2024-01-18 + +DefraDB v0.9 is a major 
pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. + +To get a full outline of the changes, we invite you to review the official changelog below. This release does include a Breaking Change to existing v0.8.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.gg/w7jYQVJ/. + +### Features + +* Mutation typed input ([#2167](https://github.com/sourcenetwork/defradb/issues/2167)) +* Add PN Counter CRDT type ([#2119](https://github.com/sourcenetwork/defradb/issues/2119)) +* Allow users to add Views ([#2114](https://github.com/sourcenetwork/defradb/issues/2114)) +* Add unique secondary index ([#2131](https://github.com/sourcenetwork/defradb/issues/2131)) +* New cmd for docs auto generation ([#2096](https://github.com/sourcenetwork/defradb/issues/2096)) +* Add blob scalar type ([#2091](https://github.com/sourcenetwork/defradb/issues/2091)) + +### Fixes + +* Add entropy to counter CRDT type updates ([#2186](https://github.com/sourcenetwork/defradb/issues/2186)) +* Handle multiple nil values on unique indexed fields ([#2178](https://github.com/sourcenetwork/defradb/issues/2178)) +* Filtering on unique index if there is no match ([#2177](https://github.com/sourcenetwork/defradb/issues/2177)) + +### Performance + +* Switch LensVM to wasmtime runtime ([#2030](https://github.com/sourcenetwork/defradb/issues/2030)) + +### Refactoring + +* Add strong typing to document creation ([#2161](https://github.com/sourcenetwork/defradb/issues/2161)) +* Rename key,id,dockey to docID terminology ([#1749](https://github.com/sourcenetwork/defradb/issues/1749)) +* Simplify Merkle CRDT workflow ([#2111](https://github.com/sourcenetwork/defradb/issues/2111)) + +### Testing + +* Add auto-doc generation 
([#2051](https://github.com/sourcenetwork/defradb/issues/2051)) + +### Continuous integration + +* Add windows test runner ([#2033](https://github.com/sourcenetwork/defradb/issues/2033)) + +### Chore + +* Update Lens to v0.5 ([#2083](https://github.com/sourcenetwork/defradb/issues/2083)) + +### Bot + +* Bump [@types](https://github.com/types)/react from 18.2.47 to 18.2.48 in /playground ([#2213](https://github.com/sourcenetwork/defradb/issues/2213)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.18.0 to 6.18.1 in /playground ([#2215](https://github.com/sourcenetwork/defradb/issues/2215)) +* Update dependencies (bulk dependabot PRs) 15-01-2024 ([#2217](https://github.com/sourcenetwork/defradb/issues/2217)) +* Bump follow-redirects from 1.15.3 to 1.15.4 in /playground ([#2181](https://github.com/sourcenetwork/defradb/issues/2181)) +* Bump github.com/getkin/kin-openapi from 0.120.0 to 0.122.0 ([#2097](https://github.com/sourcenetwork/defradb/issues/2097)) +* Update dependencies (bulk dependabot PRs) 08-01-2024 ([#2173](https://github.com/sourcenetwork/defradb/issues/2173)) +* Bump github.com/bits-and-blooms/bitset from 1.12.0 to 1.13.0 ([#2160](https://github.com/sourcenetwork/defradb/issues/2160)) +* Bump [@types](https://github.com/types)/react from 18.2.45 to 18.2.46 in /playground ([#2159](https://github.com/sourcenetwork/defradb/issues/2159)) +* Bump [@typescript](https://github.com/typescript)-eslint/parser from 6.15.0 to 6.16.0 in /playground ([#2156](https://github.com/sourcenetwork/defradb/issues/2156)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.15.0 to 6.16.0 in /playground ([#2155](https://github.com/sourcenetwork/defradb/issues/2155)) +* Update dependencies (bulk dependabot PRs) 27-12-2023 ([#2154](https://github.com/sourcenetwork/defradb/issues/2154)) +* Bump github.com/spf13/viper from 1.17.0 to 1.18.2 ([#2145](https://github.com/sourcenetwork/defradb/issues/2145)) +* Bump 
golang.org/x/crypto from 0.16.0 to 0.17.0 ([#2144](https://github.com/sourcenetwork/defradb/issues/2144)) +* Update dependencies (bulk dependabot PRs) 18-12-2023 ([#2142](https://github.com/sourcenetwork/defradb/issues/2142)) +* Bump [@typescript](https://github.com/typescript)-eslint/parser from 6.13.2 to 6.14.0 in /playground ([#2136](https://github.com/sourcenetwork/defradb/issues/2136)) +* Bump [@types](https://github.com/types)/react from 18.2.43 to 18.2.45 in /playground ([#2134](https://github.com/sourcenetwork/defradb/issues/2134)) +* Bump vite from 5.0.7 to 5.0.10 in /playground ([#2135](https://github.com/sourcenetwork/defradb/issues/2135)) +* Update dependencies (bulk dependabot PRs) 04-12-2023 ([#2133](https://github.com/sourcenetwork/defradb/issues/2133)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.13.1 to 6.13.2 in /playground ([#2109](https://github.com/sourcenetwork/defradb/issues/2109)) +* Bump vite from 5.0.2 to 5.0.5 in /playground ([#2112](https://github.com/sourcenetwork/defradb/issues/2112)) +* Bump [@types](https://github.com/types)/react from 18.2.41 to 18.2.42 in /playground ([#2108](https://github.com/sourcenetwork/defradb/issues/2108)) +* Update dependencies (bulk dependabot PRs) 04-12-2023 ([#2107](https://github.com/sourcenetwork/defradb/issues/2107)) +* Bump [@types](https://github.com/types)/react from 18.2.38 to 18.2.39 in /playground ([#2086](https://github.com/sourcenetwork/defradb/issues/2086)) +* Bump [@typescript](https://github.com/typescript)-eslint/parser from 6.12.0 to 6.13.0 in /playground ([#2085](https://github.com/sourcenetwork/defradb/issues/2085)) +* Update dependencies (bulk dependabot PRs) 27-11-2023 ([#2081](https://github.com/sourcenetwork/defradb/issues/2081)) +* Bump swagger-ui-react from 5.10.0 to 5.10.3 in /playground ([#2067](https://github.com/sourcenetwork/defradb/issues/2067)) +* Bump [@typescript](https://github.com/typescript)-eslint/eslint-plugin from 6.11.0 to 6.12.0 
in /playground ([#2068](https://github.com/sourcenetwork/defradb/issues/2068)) +* Update dependencies (bulk dependabot PRs) 20-11-2023 ([#2066](https://github.com/sourcenetwork/defradb/issues/2066)) \ No newline at end of file diff --git a/docs/guides/merkle-crdt.md b/docs/guides/merkle-crdt.md deleted file mode 100644 index 63f3f3f..0000000 --- a/docs/guides/merkle-crdt.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -sidebar_label: Merkle CRDT Guide -sidebar_position: 30 ---- -# A Guide to Merkle CRDTs in DefraDB - -## Overview -Merkle CRDTs are a type of Conflict-free Replicated Data Type (CRDT). They are designed to let independent actors update or modify shared data without any human intervention, ensuring that updates made by multiple actors are merged without conflicts. The goal of a Merkle CRDT is deterministic, automatic data merging and synchronization without any inconsistencies. CRDTs were first formalized in 2011 and have become a useful tool in distributed computing; Merkle CRDTs are a newer kind of CRDT, useful in any distributed application where data needs to be updated and merged in a consistent, conflict-free manner. - -## Background on Regular CRDTs -Conflict-free Replicated Data Types (CRDTs) are a useful tool in local and offline-first applications. They allow multiple actors or peers to collaborate and update the state of a data structure without worrying about synchronizing that state. CRDTs come in many different forms and can be applied to a variety of data types, such as simple registers, counters, sets, lists, and maps. The key feature of CRDTs is their ability to merge data deterministically, ensuring that all actors eventually reach the same state. - -To achieve this, CRDTs rely on the concept of causality or ordering of events. 
This determines how the merge algorithm works and ensures that if all events or updates are applied to a data type, the resulting state will be the same for all actors. In distributed systems, however, the concept of time and causality can be more complex than it appears. This is because it is often difficult to determine the relative order of events occurring on different computers in a network. As a result, CRDTs often rely on some sort of clock or a different mechanism for tracking the relative order of events. - -## Need for CRDTs - -Determining the relative order of events occurring on different computers in a network is difficult, which is why CRDTs are valuable: they ensure data can be merged without conflicts even when that order is uncertain. For example, consider a situation where two actors, A and B, are making updates to the same data at the same time. If actor A stamps their update with a system time of 2:39:56 PM EST on September 6, 2022, and actor B stamps their update with a system time of 2:40:00 PM, it would look like actor B's update occurred after actor A's. However, system times are not always reliable because they can be easily changed by actors, leading to inconsistencies in the relative order of events. To solve this problem, distributed systems use alternative clocks such as logical clocks or vector clocks to track the causality of events. - - -These logical and vector clocks, however, have limitations when used in high-churn networks with a large number of peers. For example, in a peer-to-peer network with a high rate of churn, logical and vector clocks require additional metadata for each peer that an actor interacts with. This metadata must be constantly maintained for each peer, which can be inefficient if the number of peers is unbounded. 
Additionally, in high churn environments, the amount of metadata grows linearly with the churn rate, making it infeasible to use these clocks in certain situations. Therefore, existing CRDT clock implementations may not be sufficient for use in high churn networks with an unbounded number of peers. - -## Formalization of Merkle CRDT - -Merkle CRDTs are a type of CRDT that combines traditional CRDTs with a new approach to CRDT clocks called a Merkle clock. This clock lets us avoid maintaining per-peer metadata in a high churn network. Instead of tracking this metadata, we can use the inherent causality of Merkle DAGs (Directed Acyclic Graphs). In these graphs, each node is identified using its content identifier (CID) and is embedded in another node. The edges in these graphs are directed, meaning one node points to another, forming a DAG structure. If a node points to another node, the CID of the first node is embedded in the value of the second. The inherent nature of Merkle graphs is the embedded relation of hashing or CIDs from one node to another, providing us with useful properties. - - -To create a Merkle CRDT, we take an existing Merkle clock and embed any CRDT that satisfies the requirements. A CRDT is made up of three components: the data type, the CRDT type (operation-based or state-based), and the semantic type. For our specific implementation, we use delta state based CRDTs with different data types and semantic types for different applications. The formal structure of a Merkle CRDT is simple - an outer box containing two inner boxes, a Merkle clock and a regular CRDT. - - - -## Merkle Clock - -Merkle clocks are a type of clock used in distributed systems to solve the issue of tracking metadata for each peer that an actor interacts with. They are based on Merkle DAGs that function like hash chains, similar to a blockchain. 
These graphs are made up of nodes and edges, where the edges are directed, meaning that one node points to another. The head of a Merkle DAG is the most recent node added to the graph, and the entire graph can be referred to by the CID of the head node. The size of the CID hash does not grow with the number of nodes in the graph, making it a useful tool for high churn networks with a large number of peers. - -The Merkle clock is created by adding an additional metadata field to each node of the Merkle DAG, called the height value, which acts as an incremental counter that increases with each new node added to the system. This allows the Merkle clock to provide a rough sense of causality, meaning that it can determine if one event happened before, at the same time, or after another event. The inherent causality of the Merkle DAG ensures that events are recorded in the correct order, making it a useful tool for tracking changes in a distributed system. - -Embedding each node's CID into its child is what produces the hash chain and its causality guarantee. Suppose node B points to node A, node C points to node B, and so on up to node Z. Node A had to exist before node B, because the value of A is embedded inside B; the value of B is in turn embedded inside the value of C, so C has to come after B; and so on, all the way to Z. Hence, if the user has constructed a Merkle DAG correctly, A has to happen before B, B before C, C before D, all the way until they get to Z. This inherent causality of CIDs and the Merkle DAG provides the user with a causality-adhering system. - -## Delta State Semantics - -Before defining delta states, it helps to distinguish the two baseline CRDT message semantics: Operation-Based CRDTs and State-Based CRDTs. 
Operation-Based CRDTs use the intent of an operation as the body or content of the message, while State-Based CRDTs use the resulting state as the body or content of the message. Both have their own advantages and disadvantages, and the appropriate choice depends on the specific use case. Operation-Based CRDTs express actions such as setting a value to 10 or incrementing a counter by 4 through the intent of the operation. State-Based CRDTs, on the other hand, include the resulting state in the message. For example, a message to set a value to 10 would include the value 10 as the body or content of the message. - -Operation-Based CRDTs tend to be smaller because their messages only contain the operation being performed, while State-Based CRDTs are larger because their messages contain both the current state and the state being changed. It is important to consider the trade-offs between these two message semantics when choosing which one to use in a given situation. - -Delta state semantics are an optimization of State-based CRDTs. While both Operation-based CRDTs and State-based CRDTs have their own pros and cons, Delta State CRDTs offer a hybrid approach that uses the state as the message content, but with the same size as an operation. - -In a Delta State CRDT, the message body includes only the minimum amount, or "delta," necessary to transform the previous state to the target state. For example, if we have a set of nine fruit names, and we want to add a banana to the set, the Delta State CRDT would only include the delta, or the value "banana," rather than expressing the entire set of 10 fruit names as in traditional State-based CRDTs. This is like an operation because it has the size of only one action, but it expresses the difference in state between the previous and target rather than the intent of the action. 
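The banana example above can be sketched as a tiny delta-state grow-only set. This is an independent illustration of the delta idea only, not DefraDB's implementation (which supports richer types such as registers and counters):

```go
package main

import "fmt"

// Delta is the minimal payload needed to move a replica toward the
// target state — here, just the newly added elements.
type Delta map[string]struct{}

// GSet is a grow-only set replicated with delta-state semantics.
type GSet struct {
	elems map[string]struct{}
}

func NewGSet() *GSet { return &GSet{elems: map[string]struct{}{}} }

// Add mutates the local state and returns the delta to broadcast,
// instead of shipping the full state.
func (s *GSet) Add(v string) Delta {
	s.elems[v] = struct{}{}
	return Delta{v: {}}
}

// Merge applies a delta received from any peer. Set union is commutative,
// associative, and idempotent, so replicas converge regardless of
// delivery order or duplicate delivery.
func (s *GSet) Merge(d Delta) {
	for v := range d {
		s.elems[v] = struct{}{}
	}
}

func (s *GSet) Len() int { return len(s.elems) }

func main() {
	a, b := NewGSet(), NewGSet()
	for _, fruit := range []string{"apple", "pear", "plum"} {
		b.Merge(a.Add(fruit)) // ship only the delta, not the whole set
	}
	d := a.Add("banana") // delta carries just "banana", not all 4 fruits
	b.Merge(d)
	b.Merge(d) // duplicate delivery is harmless (idempotent)
	fmt.Println(a.Len(), b.Len()) // both replicas hold 4 elements
}
```

Note that the delta message stays the size of one action while still carrying state, which is exactly the hybrid the text describes.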
- - -## Branching and Merging State - - -### Branching of Merkle CRDTs - - -Merkle CRDTs are based on the concept of a Merkle clock, which is in turn based on the idea of a Merkle DAG. The structure of a Merkle DAG allows it to branch and merge at any point, as long as it adheres to the requirement of being a DAG and does not create a recursive loop. - - -Branching in a Merkle CRDT system occurs when two peers make independent changes to a common ancestor node and then share those changes, resulting in two distinct states. Neither of these states is considered the correct or canonical version in a Merkle CRDT system. Instead, both are treated as their own local main copies. From these divergent states, further updates can be made, causing the divergence to increase. For example, if there are 10 nodes in common between the two states, one branch may have five new nodes while the other has six. These branches exist independently of each other, and changes can be made to each branch independently without the need for immediate synchronization. This makes CRDTs useful for local-first or offline-first applications that can operate without network connectivity. The structure of a Merkle DAG, on which a Merkle CRDT is based, naturally supports branching. - -### Merging of Merkle CRDTs - -Merging in a Merkle CRDT system involves bringing two divergent states back together into a single, canonical graph. This is done by adding a new head node, known as a merge node, to the history of the graph. The merge node has two or more parents, as opposed to the traditional single parent of most nodes. To merge these states, merge semantics must be applied to the new system. The Merkle clock provides two pieces of information that facilitate this process: the use of a CID for each parent and the ability to go back in time through both branches of the divergent state, parent by parent, before officially merging the state. Each type of CRDT defines its own merge semantics. 
- - -The process begins by finding a common ancestral node between the two divergent states. Each node in the system includes a height parameter, which is the number of nodes preceding it. This, along with the CID of the ancestral node, is provided to the embedded CRDT's merge system to facilitate the merging process. The Merkle CRDT coordinates the logistics of the Merkle DAG and passes information about the multiple parents of the merge node to the embedded CRDT's merge system, which is responsible for defining the merge semantics. As long as the CRDT and the Merkle DAG are functioning correctly, the resulting Merkle clock will also operate correctly. - diff --git a/docs/intro.md b/docs/intro.md deleted file mode 100644 index f0e7ddb..0000000 --- a/docs/intro.md +++ /dev/null @@ -1,353 +0,0 @@ ---- -sidebar_position: 1 -title: Getting Started -slug: / ---- - -DefraDB is a user-centric database that prioritizes data ownership, personal privacy, and information security. Its data model, powered by the convergence of [MerkleCRDTs](https://arxiv.org/pdf/2004.00107.pdf) and the content-addressability of [IPLD](https://docs.ipld.io/), enables a multi-write-master architecture. It features [DQL](./references/query-specification/query-language-overview.md), a query language compatible with GraphQL but providing extra convenience. By leveraging peer-to-peer networking it can be deployed nimbly in novel topologies. Access control is determined by a relationship-based DSL, supporting document or field-level policies, secured by the SourceHub network. DefraDB is a core part of the [Source technologies](https://source.network/) that enable new paradigms of decentralized data and access-control management, user-centric apps, data trustworthiness, and much more. - -DISCLAIMER: At this early stage, DefraDB does not offer access control or data encryption, and the default configuration exposes the database to the network. 
The software is provided "as is" and is not guaranteed to be stable, secure, or error-free. We encourage you to experiment with DefraDB and provide feedback, but please do not use it for production purposes until it has been thoroughly tested and developed. - -## Install - -Install `defradb` by [downloading an executable](https://github.com/sourcenetwork/defradb/releases) or building it locally using the [Go toolchain](https://golang.org/): - -```shell -git clone git@github.com:sourcenetwork/defradb.git -cd defradb -make install -``` - -In the following sections, we assume that `defradb` is included in your `PATH`. If you installed it with the Go toolchain, use: - -```shell -export PATH=$PATH:$(go env GOPATH)/bin -``` - -We recommend experimenting with queries using a native GraphQL client. Altair is a popular option - [download and install it](https://altairgraphql.dev/#download). - -## Start - -Start a node by executing `defradb start`. Keep the node running while going through the following examples. - -Verify the local connection to the node works by executing `defradb client ping` in another terminal. - -## Configuration - -In this document, we use the default configuration, which has the following behavior: - -- `~/.defradb/` is DefraDB's configuration and data directory -- `client` command interacts with the locally running node -- The GraphQL endpoint is provided at - -The GraphQL endpoint can be used with a GraphQL client (e.g., Altair) to conveniently perform requests (`query`, `mutation`) and obtain schema introspection. - -## Add a schema type - -Schemas are used to structure documents using a type system. - -In the following examples, we'll be using a simple `User` schema type. - -Add it to the database with the following command. By doing so, DefraDB generates the typed GraphQL endpoints for querying, mutation, and introspection. 
- -```shell -defradb client schema add ' - type User { - name: String - age: Int - verified: Boolean - points: Float - } -' -``` - -Find more examples of schema type definitions in the [examples/schema/](https://github.com/sourcenetwork/defradb/examples/schema/) folder. - -## Create a document - -Submit a `mutation` request to create a document of the `User` type: - -```shell -defradb client query ' - mutation { - create_User(data: "{\"age\": 31, \"verified\": true, \"points\": 90, \"name\": \"Bob\"}") { - _key - } - } -' -``` - -Expected response: - -```json -{ - "data": [ - { - "_key": "bae-91171025-ed21-50e3-b0dc-e31bccdfa1ab", - } - ] -} -``` - -`_key` is the document's key, a unique identifier of the document, determined by its schema and initial data. - -## Query documents - -Once you have populated your node with data, you can query it: - -```shell -defradb client query ' - query { - User { - _key - age - name - points - } - } -' -``` - -This query obtains *all* users and returns their fields `_key, age, name, points`. GraphQL queries only return the exact fields requested. - -You can further filter results with the `filter` argument. - -```shell -defradb client query ' - query { - User(filter: {points: {_ge: 50}}) { - _key - age - name - points - } - } -' -``` - -This returns only user documents which have a value for the `points` field *Greater Than or Equal to* (`_ge`) 50. - -## Obtain document commits - -DefraDB's data model is based on [MerkleCRDTs](./guides/merkle-crdt.md). Each document has a graph of all of its updates, similar to Git. The updates are called `commits` and are identified by `cid`, a content identifier. Each references its parents by their `cid`s. 
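To build intuition for why these commits form a tamper-evident graph, here is a toy sketch of content addressing — a plain truncated SHA-256 stand-in, not the real IPLD/CID encoding DefraDB uses:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// commitID hashes a commit's payload together with its parent IDs, so each
// identifier transitively commits to the document's entire update history.
func commitID(delta string, parents ...string) string {
	h := sha256.New()
	h.Write([]byte(delta))
	for _, p := range parents {
		h.Write([]byte(p))
	}
	return hex.EncodeToString(h.Sum(nil))[:16] // truncated for display
}

func main() {
	// A chain of three updates, each embedding its parent's identifier.
	c1 := commitID(`{"name":"Bob"}`)
	c2 := commitID(`{"points":90}`, c1)
	c3 := commitID(`{"verified":true}`, c2)
	fmt.Println(c1, c2, c3)

	// Tampering with the first update yields a different first ID, which
	// cascades into different IDs for every descendant commit.
	t1 := commitID(`{"name":"Mallory"}`)
	t2 := commitID(`{"points":90}`, t1)
	fmt.Println(c2 != t2) // true
}
```

Because a child's identifier depends on its parents' identifiers, a commit cannot exist before its parents — the same causality property the Merkle clock relies on.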
- -To get the most recent commits in the MerkleDAG for the document identified as `bae-91171025-ed21-50e3-b0dc-e31bccdfa1ab`: - -```shell -defradb client query ' - query { - latestCommits(dockey: "bae-91171025-ed21-50e3-b0dc-e31bccdfa1ab") { - cid - delta - height - links { - cid - name - } - } - } -' -``` - -It returns a structure similar to the following, which contains the update payload that caused this new commit (`delta`) and any subgraph commits it references. - -```json -{ - "data": [ - { - "cid": "bafybeifhtfs6vgu7cwbhkojneh7gghwwinh5xzmf7nqkqqdebw5rqino7u", - "delta": "pGNhZ2UYH2RuYW1lY0JvYmZwb2ludHMYWmh2ZXJpZmllZPU=", - "height": 1, - "links": [ - { - "cid": "bafybeiet6foxcipesjurdqi4zpsgsiok5znqgw4oa5poef6qtiby5hlpzy", - "name": "age" - }, - { - "cid": "bafybeielahxy3r3ulykwoi5qalvkluojta4jlg6eyxvt7lbon3yd6ignby", - "name": "name" - }, - { - "cid": "bafybeia3tkpz52s3nx4uqadbm7t5tir6gagkvjkgipmxs2xcyzlkf4y4dm", - "name": "points" - }, - { - "cid": "bafybeia4off4javopmxcdyvr6fgb5clo7m5bblxic5sqr2vd52s6khyksm", - "name": "verified" - } - ] - } - ] -} -``` - -Obtain a specific commit by its content identifier (`cid`): - -```shell -defradb client query ' - query { - commits(cid: "bafybeifhtfs6vgu7cwbhkojneh7gghwwinh5xzmf7nqkqqdebw5rqino7u") { - cid - delta - height - links { - cid - name - } - } - } -' -``` - -## DefraDB Query Language (DQL) - -DQL is compatible with GraphQL but features various extensions. - -Read the [Query specification](./references/query-specification/query-language-overview.md) to discover filtering, ordering, limiting, relationships, variables, aggregate functions, and other useful features. - - -## Peer-to-peer data synchronization - -DefraDB leverages peer-to-peer networking for data exchange, synchronization, and replication of documents and commits. - -When starting a node for the first time, a key pair is generated and stored in its "root directory" (`~/.defradb/` by default). 
Each node has a unique `Peer ID` generated from its public key. This ID allows other nodes to connect to it.

There are two types of peer-to-peer relationships supported: **pubsub** peering and **replicator** peering.

Pubsub peering *passively* synchronizes data between nodes by broadcasting *Document Commit* updates to the topic of the commit's document key. Nodes need to be listening on the pubsub channel to receive updates. This is for when two nodes *already* share a document and want to keep it in sync.

Replicator peering *actively* pushes changes from a specific collection *to* a target peer.

### Pubsub example

Pubsub peers can be specified on the command line using the `--peers` flag, which accepts a comma-separated list of peer [multiaddresses](https://docs.libp2p.io/concepts/addressing/). For example, a node at IP `192.168.1.12` listening on port 9000 with Peer ID `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B` would be referred to using the multiaddress `/ip4/192.168.1.12/tcp/9000/p2p/12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`.

Let's go through an example of two nodes (*nodeA* and *nodeB*) connecting with each other over pubsub, on the same machine.

Start *nodeA* with a default configuration:

```shell
defradb start
```

Obtain the Peer ID from its console output. In this example, we use `12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B`, but locally it will be different.
For *nodeB*, we provide the following configuration:

```shell
defradb start --rootdir ~/.defradb-nodeB --url localhost:9182 --p2paddr /ip4/0.0.0.0/tcp/9172 --tcpaddr /ip4/0.0.0.0/tcp/9162 --peers /ip4/0.0.0.0/tcp/9171/p2p/12D3KooWNXm3dmrwCYSxGoRUyZstaKYiHPdt8uZH5vgVaEJyzU8B
```

About the flags:

- `--rootdir` specifies the root directory (config and data) to use
- `--url` is the address to listen on for the client HTTP and GraphQL API
- `--p2paddr` is the multiaddress for the p2p networking to listen on
- `--tcpaddr` is the multiaddress for the gRPC server to listen on
- `--peers` is a comma-separated list of peer multiaddresses

This starts two nodes and connects them via pubsub networking.

### Collection subscription example

It is possible to subscribe to updates on a given collection by using its ID as the pubsub topic. The ID of a collection can be found in the `schemaVersionID` field of any of its documents. Here we use the collection ID of the `User` type we created above. After setting up two nodes as shown in the [Pubsub example](#pubsub-example) section, we can subscribe to collection updates on *nodeA* from *nodeB* by using the `rpc p2pcollection` command:

```shell
defradb client rpc p2pcollection add --url localhost:9182 bafkreibpnvkvjqvg4skzlijka5xe63zeu74ivcjwd76q7yi65jdhwqhske
```

Multiple collection IDs can be added at once.

```shell
defradb client rpc p2pcollection add --url localhost:9182
```

### Replicator example

Replicator peering is targeted: it allows a node to actively send updates to another node.
Let's go through an example of *nodeA* actively replicating to *nodeB*.

Start *nodeA*:

```shell
defradb start
```

In another terminal, add this example schema to it:

```shell
defradb client schema add '
  type Article {
    content: String
    published: Boolean
  }
'
```

Start (or continue running from above) *nodeB*, which will receive the updates:

```shell
defradb start --rootdir ~/.defradb-nodeB --url localhost:9182 --p2paddr /ip4/0.0.0.0/tcp/9172 --tcpaddr /ip4/0.0.0.0/tcp/9162
```

Here we *do not* specify `--peers`, as we will manually define a replicator after startup via the `rpc` client command.

In another terminal, add the same schema to *nodeB*:

```shell
defradb client schema add --url localhost:9182 '
  type Article {
    content: String
    published: Boolean
  }
'
```

Set *nodeA* to actively replicate the "Article" collection to *nodeB*:

```shell
defradb client rpc replicator set -c "Article" /ip4/0.0.0.0/tcp/9172/p2p/
```

As we add or update documents in the "Article" collection on *nodeA*, they will be actively pushed to *nodeB*. Note that changes to *nodeB* will still be passively published back to *nodeA*, via pubsub.

## Securing the HTTP API with TLS

By default, DefraDB will expose its HTTP API at `http://localhost:9181/api/v0`. It's also possible to configure the API to use TLS with self-signed certificates or Let's Encrypt.

To start defradb with self-signed certificates placed under `~/.defradb/certs/`, with `server.key` being the private key and `server.crt` being the certificate, run:

```shell
defradb start --tls
```

The keys can be generated with your generator of choice or with `make tls-certs`.

Since the keys should be stored within the DefraDB data and configuration directory, the recommended key generation command is `make tls-certs path="~/.defradb/certs"`.
If not saved under `~/.defradb/certs`, then the public key (`pubkeypath`) and private key (`privkeypath`) paths need to be explicitly defined, in addition to the `--tls` flag or `tls` set to `true` in the config.

Then to start the server with TLS, using your generated keys in a custom path:

```shell
defradb start --tls --pubkeypath ~/path-to-cert.crt --privkeypath ~/path-to-key.key
```

DefraDB also comes with automatic HTTPS for deployments on the public web. To enable HTTPS, deploy DefraDB to a server with both port 80 and port 443 open. With your domain's DNS A record pointed to the IP of your server, you can run the database using the following command:

```shell
sudo defradb start --tls --url=your-domain.net --email=email@example.com
```

Note: `sudo` is needed above for the redirection server (to bind port 80).

A valid email address is necessary for the creation of the certificate, and is important to get notifications from the Certificate Authority - in case the certificate is about to expire, etc.

## Conclusion

This gets you started with DefraDB! Read the documentation website for guides and further information.
diff --git a/docs/lensvm/lens-vm.md b/docs/lensvm/lens-vm.md
new file mode 100644
index 0000000..23bc89b
--- /dev/null
+++ b/docs/lensvm/lens-vm.md
@@ -0,0 +1,146 @@
---
sidebar_position: 1
title: LensVM
slug: /lensvm
---

## Introduction

LensVM is a bi-directional data transformation engine originally developed for DefraDB, now available as a standalone tool. It enables transforming data both forwards and in reverse using user-defined modules called Lenses, which are compiled to WebAssembly (WASM). Each Lens runs inside a secure, sandboxed WASM environment, enabling safe and modular pipelines, even when composed from multiple sources.

This guide provides the foundational steps for writing and composing Lenses using the LensVM framework.
It includes examples for Rust-based Lenses using the official SDK, as well as lower-level implementations without the SDK in other languages.

## Before you begin

Before getting started, ensure the following are installed:

- [Golang](https://golang.google.cn/doc/install) (required to run the Lens engine)
- WASM-compatible compiler: Choose a compiler that targets the `wasm32-unknown-unknown` architecture, based on your preferred programming language.

**Note**: The LensVM Engine executes Lenses in isolated WASM environments. It manages data flow, memory allocation, and function calls between the host application and the Lens module.

## Writing lenses

Lenses can be authored in any language that compiles to valid WebAssembly. To interface with the LensVM engine, each Lens must implement a specific set of exported and imported functions.

### Required and optional functions

Each Lens must implement the following interface:

| Function | Type | Required | Description |
|:-----------------------|:----------|:---------|:------------|
| `alloc(unsigned64)` | Exported | Yes | Allocates a memory block of the given size. Called by the LensVM engine. |
| `next() -> unsigned8` | Imported | Yes | Called by the Lens to retrieve a pointer to the next input data item from the engine. |
| `set_param(unsigned8) -> unsigned8` | Exported | No | Accepts static configuration data at initialization. Receives a pointer to the config and returns a pointer to an OK or error response. Called once before any input is processed. |
| `transform() -> unsigned8` | Exported | Yes | Core transformation logic. Pulls zero or more inputs using `next()`, applies the transformation, and returns a pointer to a single output item. Supports stateful transformations. |
| `inverse() -> unsigned8` | Exported | No | Optional reverse transformation logic, same interface as `transform()`. |

### WASM data format

LensVM communicates with Lenses using a binary format across the WASM boundary.
The format is as follows:

```text
[TypeId][Length][Payload]
```

- **TypeId**: A signed 8-bit integer
- **Length**: An optional unsigned 32-bit integer, depending on the TypeId
- **Payload**: Raw binary or serialized data (e.g., JSON)

#### TypeId Values

| TypeId | Meaning | Notes |
|:-------|:--------------------|:------|
| `-1` | Error | May include an error message in the Payload. |
| `0` | Nil | No `Length` or `Payload`. |
| `1` | JSON | Payload contains a JSON-serialized object. |
| `127` | End of Stream | Signals that there are no more items to process. |

### Developing with the Rust SDK

To simplify development, LensVM provides a [Rust SDK](https://docs.rs/lens_sdk). It abstracts much of the boilerplate required to build Lenses, allowing you to focus on transformation logic.

The SDK:

- Implements the required interface automatically
- Handles safe memory and data exchange across the WASM boundary
- Provides helpful macros and utilities for Lens definition

You can find it on [crates.io](https://crates.io/crates/lens_sdk) and in the official [GitHub repository](https://github.com/lens-vm/lens).

### Example Lenses

Example Lenses written in:

- [Rust](https://www.rust-lang.org/)
- [AssemblyScript](https://www.assemblyscript.org/)

can be found in this repository and in [DefraDB](https://github.com/sourcenetwork/defradb).

## Basic Lens Example

The easiest way to get started writing a Lens is by using Rust, thanks to the [`lens_sdk`](https://docs.rs/lens_sdk) crate, which provides helpful macros and utilities for Lens development.

A minimal example is shown in the [`define!` macro documentation](https://docs.rs/lens_sdk/latest/lens_sdk/macro.define.html#examples). This example demonstrates a simple forward transformation that iterates through input documents and increments the `age` field by 1.

## Writing a Lens using the SDK

Writing a Lens with the Rust SDK is straightforward and well-documented.
The examples provided in the [`lens_sdk` documentation](https://docs.rs/lens_sdk) build progressively: + +- **Example 1:** A minimal forward transformation using the `define!` macro. +- **Example 2:** Adds parameters and an inverse function to demonstrate bi-directional transformations. + +For more advanced examples, refer to the following repositories: + +- [Lens test modules](https://github.com/lens-vm/lens/blob/main/tests/modules) +- [DefraDB lens tests](https://github.com/sourcenetwork/defradb/tree/develop/tests/lenses) + +These cover schema-aware transformations, reversible pipelines, and other real-world use cases. + +## Writing a Lens without the SDK + +Creating a Lens without the Rust SDK is intended for advanced use cases—such as developing in non-Rust languages or needing fine-grained control over serialization and transformation behavior. + +Currently, the only working non-Rust example is written in [AssemblyScript](https://www.assemblyscript.org/introduction.html): + +- [AssemblyScript Lens Example](https://github.com/lens-vm/lens/blob/main/tests/modules/as_wasm32_simple/assembly/index.ts) + +This approach requires: + +- Manual implementation of memory allocation and serialization +- A deep understanding of the LensVM protocol +- Proficiency in AssemblyScript (or your chosen language) + +> **Recommendation:** For most users, we strongly recommend using the [Rust SDK](https://docs.rs/lens_sdk), even partially. It can significantly reduce development time and complexity. You can start with full SDK support and incrementally replace parts with custom logic as needed. 
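As a host-side illustration of the wire format described in the WASM data format section, the `[TypeId][Length][Payload]` framing can be sketched in Python. The concrete widths used here (a one-byte signed TypeId, a 4-byte little-endian Length) and the endianness are assumptions made for the example; consult the LensVM sources for the authoritative layout:

```python
import json
import struct

TYPE_ERROR, TYPE_NIL, TYPE_JSON, TYPE_EOS = -1, 0, 1, 127

def encode_item(type_id: int, payload: bytes = b"") -> bytes:
    # Nil and End-of-Stream frames carry no Length or Payload.
    if type_id in (TYPE_NIL, TYPE_EOS):
        return struct.pack("<b", type_id)
    # Assumed widths: 1-byte signed TypeId, 4-byte little-endian Length.
    return struct.pack("<bI", type_id, len(payload)) + payload

def decode_item(buf: bytes):
    (type_id,) = struct.unpack_from("<b", buf, 0)
    if type_id in (TYPE_NIL, TYPE_EOS):
        return type_id, None
    (length,) = struct.unpack_from("<I", buf, 1)
    payload = buf[5:5 + length]
    # JSON frames deserialize the payload; everything else stays raw bytes.
    return type_id, json.loads(payload) if type_id == TYPE_JSON else payload

frame = encode_item(TYPE_JSON, json.dumps({"age": 31}).encode())
print(decode_item(frame))  # (1, {'age': 31})
```

The same framing is what `transform()` and `next()` exchange pointers to across the WASM boundary, one item per frame.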
+ +## Composing Lenses + +Lenses can be composed into pipelines using the Go `config` sub-package: + +- [Go Config Package](https://github.com/lens-vm/lens/tree/main/host-go/config) + +Pipeline composition is handled via the `model.Lens` type: + +- [`model.Lens` Definition](https://github.com/lens-vm/lens/blob/main/host-go/config/model/lens.go) + +You can compose pipelines either by: + +- Supplying a `model.Lens` object directly +- Referencing a JSON configuration file (from local storage or a URL) that conforms to the `model.Lens` schema + +> **Note:** Composing Lenses does **not** execute them. Instead, it builds an enumerable pipeline object, which you can then iterate over to apply transformations. + +You can extend this enumerable pipeline by: + +- Adding additional Lenses through the `config` package +- Chaining in native Go-based enumerables for advanced customization + +### Composition Examples + +For practical examples of pipeline composition, explore the following: + +- [Go engine tests](https://github.com/lens-vm/lens/tree/main/host-go/engine/tests) +- [CLI integration tests](https://github.com/lens-vm/lens/tree/main/tests/integration/cli) + +These examples demonstrate how to build and extend Lens pipelines declaratively for various environments and workflows. diff --git a/docs/orbis/concepts/_category_.json b/docs/orbis/concepts/_category_.json new file mode 100644 index 0000000..8f84ff8 --- /dev/null +++ b/docs/orbis/concepts/_category_.json @@ -0,0 +1,5 @@ +{ + "label": "Concepts", + "position": 3 + +} \ No newline at end of file diff --git a/docs/orbis/concepts/dkg.md b/docs/orbis/concepts/dkg.md new file mode 100644 index 0000000..97de470 --- /dev/null +++ b/docs/orbis/concepts/dkg.md @@ -0,0 +1,33 @@ +--- +sidebar_position: 2 +--- + +# Distributed Key Generation + +Distributed Key Generation (DKG) is a cryptographic protocol that enables a group of participants to collaboratively generate a public-private keypair in a decentralized manner. 
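Conceptually, composing Lenses produces a lazy, enumerable pipeline: nothing executes until the pipeline is iterated. A minimal Python analogue of that idea is shown below (the real Go `config` package composes `model.Lens` definitions backed by WASM modules, not plain functions):

```python
def compose(source, *lenses):
    # Composition only wraps the source in lazy map() stages;
    # no lens runs until the resulting pipeline is iterated.
    pipeline = iter(source)
    for lens in lenses:
        pipeline = map(lens, pipeline)
    return pipeline

def add_age(doc):
    return {**doc, "age": doc["age"] + 1}

def rename_name_to_full_name(doc):
    doc = dict(doc)
    doc["fullName"] = doc.pop("name")
    return doc

docs = [{"name": "Bob", "age": 31}]
out = list(compose(docs, add_age, rename_name_to_full_name))
print(out)  # [{'age': 32, 'fullName': 'Bob'}]
```

Extending the pipeline is just wrapping it in another stage, which mirrors how additional Lenses or native Go enumerables are chained onto a composed `model.Lens` pipeline.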
Unlike traditional key generation methods, where a single entity holds the private key, DKG ensures that the private key is never known in its entirety by any single participant. Instead, each participant holds a "share" of the private key, and a minimum threshold of participants must cooperate to perform cryptographic operations. + +## Key Concepts + +1. **Decentralization**: + - **Definition**: DKG involves multiple nodes in the key generation process, preventing any single point of control or failure. Each node contributes to the creation of the keypair, and the private key is never reconstructed in its entirety by any individual node. + - **Purpose in Orbis**: Decentralization ensures that no single entity has full control over the secrets, aligning with the security and resilience goals of Orbis. + +2. **Key Shares**: + - **Definition**: Instead of holding the complete private key, each participant in the DKG protocol holds a share of the private key. These shares are generated during the DKG process and are essential for performing any cryptographic operations that require the private key. + - **Purpose in Orbis**: Key shares enable the system to distribute trust among multiple participants. A threshold number of shares must be combined to reconstruct the private key, ensuring that no single participant can unilaterally access the secret. + +3. **Threshold Scheme**: + - **Definition**: A threshold scheme is a cryptographic mechanism that allows a specified minimum number of participants (threshold) to cooperate in performing a cryptographic operation, such as signing or decrypting a message. The threshold is chosen during the DKG process and defines the minimum number of shares required to reconstruct the private key. + - **Purpose in Orbis**: The threshold scheme provides a balance between security and availability. 
It ensures that the system remains functional even if some participants are unavailable or compromised, while also protecting against unauthorized access. + +## DKG in Orbis + +In the Orbis system, DKG is a foundational component that underpins the decentralized custodial model. By using DKG, Orbis achieves the following: + +- **Security**: The private key is never exposed in its entirety, and no single participant can access it alone. This enhances the security of the secrets managed by Orbis. +- **Fault Tolerance**: The threshold scheme allows the system to continue functioning even if some participants are compromised or unavailable. This resilience is crucial for maintaining the availability and integrity of the secrets. +- **Trust Distribution**: By distributing key shares among multiple participants, Orbis eliminates the need for a trusted central authority, reducing the risk of centralized control and single points of failure. + +## Conclusion + +Distributed Key Generation (DKG) is a critical cryptographic technique that enables Orbis to maintain a decentralized and secure environment for secret management. By ensuring that the private key is never fully known by any single participant, DKG provides strong security guarantees and supports the system's overall goals of decentralization and fault tolerance. Through DKG, Orbis achieves a robust and resilient architecture for managing secrets in a decentralized manner. \ No newline at end of file diff --git a/docs/orbis/concepts/mpc.md b/docs/orbis/concepts/mpc.md new file mode 100644 index 0000000..4cf4821 --- /dev/null +++ b/docs/orbis/concepts/mpc.md @@ -0,0 +1,32 @@ +--- +sidebar_position: 1 +--- +# Multi-Party Computation + +Multi-Party Computation (MPC) is a cryptographic technique that allows multiple parties to jointly compute a function over their inputs while keeping those inputs private. 
In the context of Orbis, MPC plays a crucial role in ensuring the security and integrity of secret management without a single point of failure. + +## Key Concepts + +1. [**Distributed Key Generation (DKG)**](/orbis/concepts/dkg): + - **Definition**: A decentralized method for collaboratively generating a cryptographic keypair. All participating nodes know the public key, but no single node knows the private key. Instead, each node holds a "share" of the private key. + - **Purpose in Orbis**: DKG is used to create a shared keypair for a Secret Ring, ensuring that the private key remains unknown to any single actor. This keypair is essential for securely encrypting secrets. + +2. [**Proxy Re-Encryption (PRE)**](/orbis/concepts/pre): + - **Definition**: A cryptographic mechanism that allows encrypted data (ciphertext) to be transformed from one public key to another without revealing the underlying plaintext. + - **Purpose in Orbis**: PRE is employed to transfer encrypted secrets from the Secret Ring's public key to a requesting party's ephemeral public key. This transformation is done securely and privately, ensuring that neither the system nor the nodes involved in the process ever see the plaintext. + +3. [**Proactive Secret Sharing (PSS)**](/orbis/concepts/pss): + - **Definition**: An algorithm that periodically redistributes the shares of a private key among nodes without changing the long-term keypair. This process mitigates the risk of adversaries compromising the system over time. + - **Purpose in Orbis**: PSS ensures the long-term security of the secret management system by preventing adversaries from accumulating enough shares to reconstruct the private key. It does so by periodically refreshing the shares, making it impossible for an adversary to exploit nodes indefinitely. + +## MPC in Orbis + +In Orbis, MPC techniques are foundational to the system's decentralized custodial model. 
They provide the following benefits: + +- **Security**: By ensuring that no single actor can access the entire secret or private key, MPC protects against unauthorized access and single points of failure. +- **Verifiability**: The use of MPC allows for the secure verification of cryptographic operations, ensuring that only authorized parties can access secrets. +- **Byzantine Fault Tolerance**: The system can tolerate a certain number of faulty or malicious nodes without compromising the integrity of the secret management process. + +## Conclusion + +MPC enables Orbis to maintain a decentralized and secure environment for secret management. By leveraging DKG, PSS, and PRE, Orbis achieves a robust system that is resistant to various attack vectors while ensuring that secrets are only accessible to authorized parties. These cryptographic protocols underpin the core functionality of Orbis, making it a reliable and secure solution for decentralized custodial secret management. \ No newline at end of file diff --git a/docs/orbis/concepts/pre.md b/docs/orbis/concepts/pre.md new file mode 100644 index 0000000..825be1e --- /dev/null +++ b/docs/orbis/concepts/pre.md @@ -0,0 +1,37 @@ +--- +sidebar_position: 3 +--- + +# Proxy Re-Encryption + +![Proxy ReEncryption](/img/orbis/pre.png) +*image credit: https://medium.com/nucypher/unveiling-umbral-3d9d4423cd71* + +Proxy Re-Encryption (PRE) is a cryptographic technique that allows ciphertexts encrypted under one public key to be re-encrypted to another public key without revealing the underlying plaintext. This transformation is performed using a special cryptographic key called a re-encryption key (ReKey). In the context of Orbis, PRE enables secure and private transfer of encrypted secrets between different parties, ensuring that the secret is never exposed to intermediaries. + +## Key Concepts + +1. 
**Re-Encryption Key (ReKey)**: + - **Definition**: A special cryptographic key generated to convert ciphertext encrypted under one public key (A) to another public key (B). The ReKey is derived from the private key associated with public key A and the public key B. + - **Purpose in Orbis**: ReKeys are used to securely transfer encrypted secrets from the Secret Ring's public key to a requesting user's ephemeral public key, enabling the user to decrypt the secret without exposing it to the Secret Ring or any intermediaries. + +2. **Delegated Re-Encryption**: + - **Definition**: The process of re-encrypting ciphertext using a ReKey, which can be performed by an untrusted third party or server. The third party does not gain access to the plaintext during this process. + - **Purpose in Orbis**: Delegated re-encryption allows the Orbis system to offload the re-encryption process to nodes without risking exposure of the underlying secret. This capability is essential for maintaining the privacy and security of the user's data. + +3. **Ciphertext Transformation**: + - **Definition**: The process by which encrypted data (ciphertext) is converted from being encrypted under one public key to another, using a ReKey. This transformation ensures that the data remains encrypted throughout the process. + - **Purpose in Orbis**: Ciphertext transformation is used to manage access to secrets without decrypting them. For example, a secret encrypted under the Secret Ring's public key can be transformed to be decryptable only by the intended recipient's public key, maintaining confidentiality. + +## PRE and DKG in Orbis + +In the Orbis system, PRE and [Distributed Key Generation (DKG)](/orbis/concepts/dkg) work in tandem to provide a robust and secure framework for secret management: + +- **DKG**: DKG enables the decentralized creation of a shared public-private keypair, with the private key split into shares held by different nodes. 
This setup ensures that no single participant has access to the full private key, enhancing security. +- **PRE**: Utilizing the public key generated by the DKG process, PRE allows for the secure re-encryption of data. The private key shares generated during DKG are used to create ReKeys without reconstructing the full private key, thus maintaining security and privacy. This integration ensures that even though the secret's ciphertext is transformed between different public keys, the underlying plaintext remains protected from unauthorized access. + +The combination of DKG and PRE in Orbis ensures that secrets can be securely managed and transferred across different parties, all while preserving the integrity and confidentiality of the data. + +## Conclusion + +Proxy Re-Encryption (PRE) is a powerful cryptographic primitive that, in conjunction with [Distributed Key Generation (DKG)](/orbis/concepts/dkg), enables secure and private data transfer within the Orbis system. By leveraging ReKeys and delegated re-encryption, Orbis ensures that secrets are securely managed and transferred without exposing the plaintext to unauthorized parties. PRE and DKG together provide robust security and privacy guarantees, making Orbis a reliable solution for decentralized custodial secret management. \ No newline at end of file diff --git a/docs/orbis/concepts/pss.md b/docs/orbis/concepts/pss.md new file mode 100644 index 0000000..2e2d69c --- /dev/null +++ b/docs/orbis/concepts/pss.md @@ -0,0 +1,34 @@ +# Proactive Secret Sharing + +Proactive Secret Sharing (PSS) is a cryptographic protocol that enhances the security of a shared secret over time by periodically refreshing the shares held by participants. This process prevents adversaries from accumulating shares to eventually reconstruct the secret key. PSS is crucial in systems where long-term security is paramount, as it mitigates the risk of a compromise that may occur gradually over time. + +## Key Concepts +1. 
**Secret Shares**:
   - **Definition**: In PSS, a secret (such as a private key) is divided into shares, with each participant holding one share. The secret can only be reconstructed with a sufficient number of shares, known as the threshold.
   - **Purpose in Orbis**: By distributing shares among multiple nodes, Orbis ensures that no single entity has complete control over the private key, enhancing the system's security.

2. **Share Refreshing**:
   - **Definition**: The process of periodically updating the secret shares without changing the underlying secret. This process prevents adversaries from collecting enough shares over time to reconstruct the secret.
   - **Purpose in Orbis**: Share refreshing ensures that even if some shares are compromised, the secret remains secure as the shares are periodically updated, nullifying the compromised shares' usefulness.

3. **Epochs**:
   - **Definition**: Time intervals at the end of which the shares are refreshed. The length of an epoch is determined based on the security requirements and threat model.
   - **Purpose in Orbis**: Epochs provide a temporal boundary for when shares must be refreshed, ensuring that the system maintains its security properties over time.

## PSS, DKG, and PRE in Orbis

In the Orbis system, PSS works alongside [Distributed Key Generation (DKG)](/orbis/concepts/dkg) and [Proxy Re-Encryption (PRE)](/orbis/concepts/pre) to provide a comprehensive and secure framework for secret management:

- **DKG**: DKG is responsible for the initial generation of a shared keypair, with the private key split into shares distributed among participants. These shares are essential for performing cryptographic operations securely.
- **PSS**: While DKG establishes the initial distribution of secret shares, PSS ensures the long-term security of the system by periodically refreshing these shares. This prevents the gradual accumulation of shares by adversaries, thus protecting the secret's confidentiality.
- **PRE**: PRE utilizes the public key generated by DKG and the shares maintained and refreshed by PSS to securely transfer ciphertext between different public keys. PSS ensures that the private key shares used in creating ReKeys remain secure over time, maintaining the integrity of the re-encryption process.

Together, DKG, PSS, and PRE form a robust security model in Orbis. DKG initiates a secure keypair, PSS maintains the security of the keypair over time, and PRE facilitates the secure and private transfer of encrypted secrets.

## Conclusion

Proactive Secret Sharing (PSS) is a critical component in ensuring the long-term security and integrity of the Orbis system. By periodically refreshing secret shares, PSS prevents adversaries from exploiting compromised shares over time. When combined with [Distributed Key Generation (DKG)](/orbis/concepts/dkg) and [Proxy Re-Encryption (PRE)](/orbis/concepts/pre), PSS provides a resilient and secure framework for decentralized custodial secret management. This combination ensures that secrets are managed securely, remain confidential, and are accessible only to authorized parties.

diff --git a/docs/orbis/getting-started/1-install.md b/docs/orbis/getting-started/1-install.md
new file mode 100644
index 0000000..ecfcaa6
--- /dev/null
+++ b/docs/orbis/getting-started/1-install.md
@@ -0,0 +1,35 @@
# Installing Orbis

You can get the `orbisd` binary from the releases page of the Orbis repo: [https://github.com/sourcenetwork/orbis-go/releases/tag/v0.2.3](https://github.com/sourcenetwork/orbis-go/releases/tag/v0.2.3).

```bash
cd $HOME
wget https://github.com/sourcenetwork/orbis-go/releases/download/v0.2.3/orbisd
chmod +x orbisd
sudo mv orbisd /usr/bin
```

## From Source

You can download the code and compile your own binaries if you prefer.
However, you will need a local installation of the `go` toolchain, at a minimum version of 1.21:

```bash
cd $HOME
git clone https://github.com/sourcenetwork/orbis-go
cd orbis-go
git checkout v0.2.3
make build
cp ./build/orbisd $GOBIN/orbisd
export PATH=$PATH:$GOBIN
```

Now you will have the `orbisd` binary available on your local system.

## Docker

You can either use the pre-existing docker image hosted on our GitHub, or build your own.

`docker pull ghcr.io/sourcenetwork/orbis:0.2.3`

### Build Docker Image from Source

```bash
cd $HOME
git clone https://github.com/sourcenetwork/orbis-go
cd orbis-go
git checkout v0.2.3
docker build -t orbis:0.2.3 .
```
\ No newline at end of file
diff --git a/docs/orbis/getting-started/2-create.md b/docs/orbis/getting-started/2-create.md
new file mode 100644
index 0000000..caef2d6
--- /dev/null
+++ b/docs/orbis/getting-started/2-create.md
@@ -0,0 +1,95 @@
# Create a Secret Ring

***To skip ahead to using an existing Secret Ring deployment, proceed to [Storing a Secret](./secrets).***

---

Secret Rings are deployments of Orbis nodes with a single shared [distributed key pair](/orbis/concepts/dkg). To create a Secret Ring, you'll need a minimum of 3 Orbis nodes.

## Docker Compose

The easiest method to get a local Secret Ring deployed is with our Docker Compose setup, which will initialize 3 Orbis nodes, as well as the authorization service (Zanzi), and finally will bootstrap the peer-to-peer network. You will need **both** docker and docker-compose set up locally.

:::warning
The following demo docker compose file uses deterministic private keys and ***IS NOT*** suitable for production deployments. Use at your own risk.
:::

The docker compose file is hosted in the `demo/zanzi` folder on the Orbis repo, found [here](https://github.com/sourcenetwork/orbis-go/blob/develop/demo/zanzi/compose.yaml).
+
+To run the docker compose file:
+```bash
+cd $HOME
+git clone https://github.com/sourcenetwork/orbis-go
+cd orbis-go
+docker-compose -f demo/zanzi/compose.yaml up
+```
+
+## Ring Manifest
+The Ring Manifest is the genesis file that describes the initial parameters of a Secret Ring. This includes the authentication and authorization schemes, which DKG protocol to use, etc. Importantly, it also has the addresses and `PeerIDs` of the nodes that will initialize the Ring.
+
+Here is the manifest for the root ring we will create for the set of nodes started with the docker-compose setup above. To learn more about each parameter, please refer to the [Manifest Reference](/orbis/reference/manifest) doc.
+```json
+{
+  "manifest": {
+    "n": 3,
+    "t": 2,
+    "dkg": "rabin",
+    "pss": "avpss",
+    "pre": "elgamal",
+    "bulletin": "p2p",
+    "transport": "p2p",
+    "authentication": "jws-did",
+    "authorization": "zanzi",
+    "nodes": [
+      {
+        "id":"16Uiu2HAmLyq5HXHSFxvxmGcsnwsYPZvTDtfe3CYnWDK8jDAHhJC5",
+        "address":"/ip4/127.0.0.1/tcp/9000"
+      },
+      {
+        "id":"16Uiu2HAmR7vXGm8Zohvs6cD3PtGSyAUFRDWypsAcmkiyMYTLhEe4",
+        "address":"/ip4/23.88.72.49/tcp/9000"
+      },
+      {
+        "id":"16Uiu2HAmVBRogwVBbByVvsYywp2jUNBqfe1zTFNtaVMRvSyUndPX",
+        "address":"/ip4/138.201.36.107/tcp/9000"
+      }
+    ],
+    "nonce": 0
+  }
+}
+```
+
+Save the above manifest definition to `manifest.json`. Then we can initialize the ring on each node. Each node *must* be configured independently with the `create-ring` command.
+
+```bash
+# Run on each node
+orbisd -s ring-service create-ring -f <path-to>/manifest.json
+```
+
+This will initialize the DKG process and your node will start generating its respective DKG Share. This process can't complete unless all nodes are online and syncing. Once completed, the Secret Ring will have an initialized DKG, and is ready to start receiving Store and ReEncrypt requests!
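To build intuition for what each node's DKG Share is, here is a toy Shamir-style secret-sharing sketch in Python. This is purely illustrative and is not Orbis's actual Rabin DKG (which generates the shares jointly, so no dealer ever knows the full secret); it only shows how `n = 3` holders of shares can let any `t = 2` of them reconstruct a value.

```python
# Toy t-of-n secret sharing, the idea behind the ring's DKG shares.
# Illustration only -- NOT Orbis's Rabin DKG.
import random

P = 2**127 - 1  # a prime modulus for the toy finite field

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=12345, t=2, n=3)  # n=3, t=2 as in the manifest
assert recover(shares[:2]) == 12345           # any 2 shares suffice
assert recover(shares[1:]) == 12345
```

Fewer than `t` shares reveal nothing about the secret, which is why the ring stays secure even if a minority of nodes is compromised.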
+
+## Orbis Client
+The following commands give a quick overview of the client workflow covered in the next steps:
+```bash
+# Env Vars
+export ORBIS_CLIENT_FROM="alice"
+export ORBIS_CLIENT_SERVER_ADDR=":8081"
+export ORBIS_CLIENT_AUTHZ_ADDR=":8080"
+export ORBIS_CLIENT_RING_ID="zQ123"
+
+# Create Policy
+orbisd client policy create -f policy.yaml => policy-id=0x123
+orbisd client policy describe 0x123 => policy-data=...
+
+# Create Secret (managed authorization)
+orbisd client put "mysecret" --authz managed --policy 0x123 --resource secret --permission read
+orbisd client get ABC123
+
+# Create Secret (unmanaged authorization)
+orbisd client policy register 0x123 secret mysecret
+orbisd client put "mysecret" --authz unmanaged --permission "0x123/secret:mysecret#read"
+orbisd client get ABC123
+
+# Add Bob as a reader
+orbisd client policy set 0x123 secret mysecret collaborator did:key:bob
+orbisd client policy check 0x123 did:key:bob secret:mysecret#read => valid=true/false
+```
\ No newline at end of file
diff --git a/docs/orbis/getting-started/3-policy.md b/docs/orbis/getting-started/3-policy.md
new file mode 100644
index 0000000..1ff779a
--- /dev/null
+++ b/docs/orbis/getting-started/3-policy.md
@@ -0,0 +1,51 @@
+# Setup Authorization Policy
+Before we can start storing secrets in the newly created Secret Ring, we must define our access policy. This access policy determines the resources and permissions that users must satisfy to recover secrets.
+
+This section assumes you are using the `Zanzi` Authorization GRPC Service included in the example `docker-compose.yaml` file from the previous step. If you are using the `SourceHub ACP` module instead, you can reference the [Create a SourceHub ACP Policy](/sourcehub/getting-started/create-a-policy) doc.
+
+## Create a Policy
+
+The `Zanzi` Authorization GRPC Service is a [Zanzibar](/sourcehub/concepts/zanzibar)-based global decentralized authorization system.
Developers write policies using our Relation-Based Access Control (RelBAC) DSL, which allows you to define resources, relations, and permissions.
+
+- **Resources**: Generic containers for some kind of "thing" you wish to gate access to or provide authorization for. A resource can be anything from a secret on [Orbis](/orbis), a document on [DefraDB](/defradb), or any other resource.
+
+- **Relations**: Named connections between resources. Just as a database schema defines relations between its types, so does the SourceHub ACP module. This allows us to create expressive policies that go beyond traditional *Role-Based* or *Attribute-Based* access control.
+
+- **Permissions**: Computed queries over resources, relations, and even other permissions (they're recursive!).
+
+### Secret Resource Policy
+Here we are going to create a basic policy to govern access to our secrets in Orbis.
+
+Create a file named `orbis-policy.yaml` and paste the following:
+```yaml
+name: Orbis Secret Policy
+
+resources:
+
+  secret:
+    relations:
+      owner:
+        types:
+          - actor
+      collaborator:
+        types:
+          - actor
+
+    permissions:
+      read:
+        expr: owner + collaborator
+      edit:
+        expr: owner
+      delete:
+        expr: owner
+```
+
+Now that we have our policy defined, we can upload it to the `Zanzi` service:
+```shell
+orbisd client policy create -f orbis-policy.yaml
+```
+
+This will return the created policy ID. We can confirm the policy has been created by using the `policy describe` command.
+```bash
+orbisd client policy describe <policy-id>
+```
\ No newline at end of file
diff --git a/docs/orbis/getting-started/4-secrets.md b/docs/orbis/getting-started/4-secrets.md
new file mode 100644
index 0000000..ad18ccd
--- /dev/null
+++ b/docs/orbis/getting-started/4-secrets.md
@@ -0,0 +1,77 @@
+# Store, Recover, and Share Secrets
+Now that we have initialized our orbis nodes, defined our manifest, created the secret ring, and uploaded our policy, we are finally ready to store secrets into our Ring!
+
+## Store Secrets
+
+To store a secret in a ring, there is a series of steps that clients must execute (thankfully our CLI Client does this all for you). The TLDR is that secrets must be:
+- Encrypted to the Secret Ring's DKG public key
+- Accompanied by a Proof-of-Encryption commitment that cryptographically proves the first step was done correctly
+- Sent by the client, together with the commitment proof and the authorization policy details, to any of the Orbis nodes within the Secret Ring as a `store-secret` request
+- Finally, broadcast by the nodes amongst each other
+
+To create and store a secret, we have two methods for how the client will create the authorization policy information for the `store-secret` request. These methods are called `Managed` and `Unmanaged`.
+
+With `Managed` authorization, the client will automatically register the required Zanzibar resource to the policy when storing the secret.
+
+The `Unmanaged` authorization has no automated registration functionality, so the caller is responsible for A) ensuring the correct resources are registered in the policy, and B) correctly creating the Zanzibar authorization permission string and ensuring they themselves have permission to access the supplied resource.
+
+### Managed Example
+To create a `managed` secret with the policy defined in the previous step:
+```bash
+orbisd client --ring-id <ring-id> put "MySuperSecretInformation" --authz managed --policy <policy-id> --resource secret --permission read
+```
+
+Here the client will get the `<ring-id>` aggregate public key, encrypt `MySuperSecretInformation` to the public key, create the proof, locally generate the `<secret-id>` (deterministic), register the `<secret-id>` as a new object in the `<policy-id>` policy with ourselves set as the `owner`, craft the full permission object `<policy-id>/secret:<secret-id>#read`, and send the full `store-secret` request with all this information to the Secret Ring.
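Conceptually, the first step (encrypting to the ring's aggregate public key) is plain public-key encryption. Since the example manifest selects `elgamal`, here is a minimal, illustrative ElGamal sketch in Python over a toy multiplicative group; this is not the construction `orbisd` actually uses, and it omits the proof-of-encryption commitment:

```python
# Toy ElGamal: encrypt a message to an aggregate public key, as a client
# does before sending a `store-secret` request. Illustration only.
import random

P = 2**255 - 19  # a well-known prime, used here as a toy modulus
G = 2            # toy group generator

sk = random.randrange(2, P - 1)  # the ring's (distributed) private key
pk = pow(G, sk, P)               # the aggregate public key clients see

def encrypt(m, pk):
    r = random.randrange(2, P - 1)  # fresh ephemeral randomness per message
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def decrypt(c1, c2, sk):
    return (c2 * pow(pow(c1, sk, P), -1, P)) % P

# Encode the secret as an integer small enough to fit the group.
secret = int.from_bytes(b"MySuperSecretInformation", "big")
c1, c2 = encrypt(secret, pk)
assert decrypt(c1, c2, sk) == secret
```

In Orbis, `sk` never exists in one place; each node only holds a share of it, which is what makes the recover flow in the next section a threshold operation.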
+
+### Unmanaged Example
+To create an `unmanaged` secret, first we manually register an object into the policy, then create the secret.
+```bash
+# Register an object called `MySecretName`
+orbisd client policy register <policy-id> secret <object-name>
+# Create the secret using the `MySecretName` authorization object
+orbisd client put "MySuperSecretInformation" --authz unmanaged --permission "<policy-id>/secret:<object-name>#read"
+```
+
+You may have noticed that the difference between the `unmanaged` request and the `managed` request is that we can manually define `<object-name>` instead of using the deterministically generated `<secret-id>`. This is because we separated the `register` command from the `put` command. Moreover, we supplied the `--permission` as the fully qualified Zanzibar tuple `<policy-id>/secret:<object-name>#read` as a single string.
+
+In both examples, the final request sent to the Secret Ring is similar: the encrypted secret is verified against the proof commitment, and finally broadcast to the network bulletin so it can be received by all the other nodes.
+
+## Recover Secrets
+Now that we have stored a secret, we can send a `recover` request, which will trigger a Threshold [Proxy Re-Encryption](/orbis/concepts/pre) request. This request will first be authenticated to ensure the requesting client has the correct Proof-of-Possession JWS token, then authorized by each node before re-encryption. The authorization runs a `Check` call against the `Zanzi` authorization service to determine if the client has the correct permission as configured when the secret was first created.
+
+In the above example, this means the `Check` call will determine if the client has the `read` permission on the provided secret resource within the `<policy-id>` policy.
+
+```bash
+orbisd client get <secret-id>
+```
+> Note: You will need to replace `<secret-id>` with `<object-name>` if you used the `unmanaged` authorization.
+
+This will automatically craft the proof-of-possession JWS token, initiate the proxy re-encryption request, wait for the response, and finally decrypt it.
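To see why a threshold `t` of nodes can serve a recover request without any single node ever holding the full private key, here is a toy Python sketch of threshold decryption. Orbis's actual PRE protocol re-encrypts to the requesting client's key rather than decrypting in the clear, so treat this purely as intuition for the threshold mechanics:

```python
# Toy threshold decryption: t nodes combine partial results so the full
# private key is never reconstructed. Illustration only -- Orbis's PRE
# re-encrypts to the client instead of decrypting in the clear.
import random

P = 2**255 - 19  # toy prime modulus
G = 2
Q = P - 1        # exponent arithmetic is done mod the group order

# Shamir-share a private key sk among n=3 nodes with threshold t=2.
sk = random.randrange(2, Q)
a1 = random.randrange(Q)
shares = [(x, (sk + a1 * x) % Q) for x in (1, 2, 3)]
pk = pow(G, sk, P)

# A client encrypts to the aggregate public key (ElGamal, as before).
r = random.randrange(2, Q)
c1, c2 = pow(G, r, P), (42 * pow(pk, r, P)) % P

# Two nodes each publish a partial decryption c1^share_i. Combining them
# with Lagrange coefficients (evaluated at 0, mod Q) yields c1^sk without
# anyone reconstructing sk. (In this toy, the x-gap must be invertible mod Q.)
(x1, s1), (x2, s2) = shares[0], shares[1]
l1 = (-x2) * pow(x1 - x2, -1, Q) % Q
l2 = (-x1) * pow(x2 - x1, -1, Q) % Q
c1_sk = pow(pow(c1, s1, P), l1, P) * pow(pow(c1, s2, P), l2, P) % P
assert (c2 * pow(c1_sk, -1, P)) % P == 42  # plaintext recovered
```

Each node contributed only an exponentiation with its own share; the combination step is what enforces the `t`-of-`n` policy from the manifest.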
+
+
+## Share a Secret
+To share a secret with another person, we can update the policy state so that our target subject has the necessary permissions to access the secret. In our [example policy](/orbis/getting-started/policy#create-a-policy) from before, our `secret` resource has an additional `collaborator` relation. Moreover, the `read` permission is the expression `owner + collaborator`, which means either the `owner` or a `collaborator` can `read`.
+
+We're going to add `Bob`, identified by `did:key:z6Mkmyi3eCUYJ6w2fbgpnf77STLcnMf6tuJ56RQmrFjce6XS`, as a `collaborator` so he can inherit the `read` permission and therefore access our stored secret.
+
+:::info
+Reminder: there is nothing special about the names/labels used within our policy. Although we use a permission named `read`, there is no special power or requirement tied to that name; it can be any valid name.
+:::
+
+```bash
+orbisd keys add bob
+export BOB=$(orbisd keys show bob --did)
+orbisd client policy set <policy-id> secret <object-name> collaborator $BOB
+```
+
+We can confirm that the policy relation was created by manually running a `Check` command.
+```bash
+orbisd client policy check <policy-id> $BOB secret:<object-name>#read
+```
+Which should return `valid: true`.
+
+Finally, `Bob` can also recover the secret:
+```bash
+orbisd client --from bob get <secret-id>
+```
diff --git a/docs/orbis/getting-started/_category_.json b/docs/orbis/getting-started/_category_.json
new file mode 100644
index 0000000..27e66a2
--- /dev/null
+++ b/docs/orbis/getting-started/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Getting Started",
+  "position": 2
+}
\ No newline at end of file
diff --git a/docs/orbis/networks/_category_.json b/docs/orbis/networks/_category_.json
new file mode 100644
index 0000000..a6f0323
--- /dev/null
+++ b/docs/orbis/networks/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Networks"
+}
+
\ No newline at end of file
diff --git a/docs/orbis/networks/testnet/_category_.json b/docs/orbis/networks/testnet/_category_.json
new file mode 100644
index 0000000..5ab6ca9
--- /dev/null
+++ b/docs/orbis/networks/testnet/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Testnet 1"
+}
+
\ No newline at end of file
diff --git a/docs/orbis/networks/testnet/join.md b/docs/orbis/networks/testnet/join.md
new file mode 100644
index 0000000..b35a03d
--- /dev/null
+++ b/docs/orbis/networks/testnet/join.md
@@ -0,0 +1,267 @@
+---
+title: Join Root Ring
+---
+
+# How to Join Testnet 1 Root Ring
+The following details the necessary steps to join the Testnet 1 Orbis Root Ring as a validator. Only existing, approved validators can operate an Orbis node in the Root Ring.
+
+## Hardware Requirements
+Orbis doesn't have any specific hardware requirements at the moment; however, it is recommended to use a "good enough" datacenter machine.
+
+Orbis can be deployed alongside your existing validator hardware as a co-process, or on a separate machine entirely.
+
+Minimum hardware requirements:
+* 2-core amd64 CPU (or 2 virtual cores)
+* 4GB RAM
+* 32GB SSD Storage
+* 100Mbps bi-directional internet connection.
+
+## Orbis Binary
+You can get the `orbisd` binary from the releases page of the Orbis repo: [https://github.com/sourcenetwork/orbis-go/releases/tag/v0.2.3](https://github.com/sourcenetwork/orbis-go/releases/tag/v0.2.3).
+```bash
+cd $HOME
+wget https://github.com/sourcenetwork/orbis-go/releases/download/v0.2.3/orbisd
+chmod +x orbisd
+sudo mv orbisd /usr/bin
+```
+
+### From Source
+You can download the code and compile your own binaries if you prefer. However, you will need a local installation of the `go` toolchain at a minimum version of 1.21.
+```bash
+cd $HOME
+git clone https://github.com/sourcenetwork/orbis-go
+cd orbis-go
+git checkout v0.2.3
+make build
+cp ./build/orbisd $GOBIN/orbisd
+export PATH=$PATH:$GOBIN
+```
+Now you will have the `orbisd` binary available on your local system.
+
+## Docker
+You can either use the pre-existing docker image hosted on our GitHub, or build your own.
+
+### Github Container Registry (coming soon)
+`docker pull ghcr.io/sourcenetwork/orbis:0.2.3`
+
+### Build Docker Image from Source
+```bash
+cd $HOME
+git clone https://github.com/sourcenetwork/orbis-go
+cd orbis-go
+git checkout v0.2.3
+docker build -t <image-tag> .
+```
+
+## Docker Compose
+> TODO
+
+## Initialization
+To deploy your orbis node and join the Root Ring running on [SourceHub Testnet 1](/sourcehub/testnet/overview), generate or use an existing sourcehub keypair, and update your configuration accordingly.
+
+Orbis will also need access to a running SourceHub Testnet 1 RPC endpoint; this can be:
+* A) The RPC endpoint of your existing validator node, if you are running both daemons on a single machine
+* B) Your own hosted RPC endpoint
+* C) A public RPC endpoint
+  * http://rpc1.testnet1.source.network:26657
+  * http://rpc2.testnet1.source.network:26657
+
+### Key Ring
+As for the sourcehub keypair, Orbis needs access to a sourcehubd keyring instance.
You already have one running on your validator node, but you can also create a keyring on any node with `sourcehubd keys add <keyname>`. If you are running both `orbisd` and `sourcehubd` on the same machine, then you can share keys between your validator and orbis node.
+
+The `keyname`, whether you use an existing one or create a new one, must match what you specify in the `orbis.yml` config, defined below.
+
+By default `sourcehubd keys add <keyname>` will use the `os` keyring backend. You may use other key backends supported by the SourceHub keyring utility, but whatever backend you use *MUST* match what is in your `orbis.yml` config. To learn more about the various backends, see the documentation available from the `sourcehubd keys --help` command.
+
+:::caution
+If you start your node and you get an error similar to `error: bech32 decode ...` then you have likely used either the wrong `keyname` or `keyringBackend`.
+:::
+
+### Faucet
+To properly run the Orbis daemon and join the Root Ring, you need to ensure the account for the key you chose has a sufficient number of $OPEN tokens, since initializing the DKG requires transactions on the network that need to be paid for.
+
+As usual, you can go to the [SourceHub Testnet 1 faucet](https://faucet.source.network/) and use the address of the new key: `sourcehubd keys show <keyname> --address`.
+
+## Configuration
+You may have to configure orbis to work in your specific environment or hardware, depending on your storage and networking resources. Here is a fairly standard configuration file, `orbis.yml`:
+```yaml
+grpc:
+  # GRPC Endpoint
+  grpcURL: "0.0.0.0:8080"
+  # Rest API Endpoint (optional)
+  restURL: "0.0.0.0:8090"
+  # GRPC Request logging (true/false)
+  logging: true
+
+logger:
+  # Default log level ("fatal", "error", "warn", "info", "debug")
+  level: "info"
+
+host:
+  # P2P Host Address (required - see below)
+  listenAddresses:
+    - /ip4/0.0.0.0/tcp/9000
+  # P2P Bootstrap peers (required - see below)
+  bootstrap_peers:
+    - /dns4/<dns-name>/tcp/9000/p2p/<peer-id>
+
+transport:
+  # P2P Peer exchange topic (required)
+  rendezvous: "orbis-transport"
+
+db:
+  # DB data store path, prefixed with $HOME/.orbis
+  # So "data" would result in a path $HOME/.orbis/data
+  path: "data"
+
+cosmos:
+  # Cosmos chain ID
+  chainId: sourcehub-testnet1
+  # Cosmos keyring key name
+  accountName: <keyname>
+  # Cosmos keyring backend ('os' is the default when running sourcehubd)
+  keyringBackend: os
+  # Cosmos address prefix ('source' is for SourceHub)
+  addressPrefix: source
+  # Transaction fees
+  fees: 2000uopen
+  # SourceHub data folder where your keyring is defined
+  home: ~/.sourcehub
+  rpcAddress: <rpc-address>
+```
+
+When starting an Orbis daemon, you can specify the config file path with the `--config <path>` flag.
+
+## Configuration Requirements
+:::info
+required `host.listenAddresses`
+:::
+
+You *MUST* run orbis with a publicly available `host.listenAddresses` entry. In the above example config this is:
+```yaml
+listenAddresses:
+  - /ip4/0.0.0.0/tcp/9000
+```
+You can specify your own port to listen on, but the host port and bind address must result in a publicly available listener. This is the port over which all the orbis nodes communicate their P2P network traffic.
+
+:::info
+required `host.bootstrapPeers`
+:::
+
+You *MUST* use the `host.bootstrapPeers` address provided in the [#validator-info](https://discord.com/channels/427944769851752448/1207162452337360936) channel in the Source Network discord.
+```yaml
+  bootstrap_peers:
+    - /dns4/<dns-name>/tcp/9000/p2p/<peer-id>
+```
+
+The GRPC and Rest address/port don't necessarily have to be public; this can match whatever deployment environment you are currently using. However, if you want to access these APIs from other machines, you must create the appropriate port and network rules to do so.
+
+### Start your node
+Once you have configured your keyring and config file, you can start your node.
+```bash
+orbisd start --config /path/to/orbis.yml
+```
+:::info
+When starting your orbis daemon, your configured `cosmos.rpcAddress` endpoint **must** be live, since orbis tries to connect to this node immediately upon startup. If you have any errors relating to `cosmos client` then you have either used the wrong `rpcAddress` or the RPC node isn't online.
+:::
+
+### SystemD Service (Optional)
+If you wish to use systemd to manage your daemon, you can use the following configuration.
+
+Create the following file: `/etc/systemd/system/orbisd.service`
+```bash
+[Unit]
+Description=Orbis service
+After=network-online.target
+
+[Service]
+User=<user>
+ExecStart=<path-to>/orbisd start --config <path-to>/orbis.yml
+Restart=no
+LimitNOFILE=4096
+
+[Install]
+WantedBy=multi-user.target
+```
+
+## Joining the Root Ring
+The *Root Ring* is the Orbis deployment maintained by the validators of the SourceHub Testnet. The primary function of nodes maintaining the Root Ring is to create a shared keypair using a Distributed Key Generation (DKG) algorithm.
+
+An Orbis ring is defined by its *Manifest*, which is a configuration that describes all the initial parameters of a ring, such as which DKG algorithm to use, how to authenticate and authorize requests, the threshold number of nodes required for a proxy re-encryption, etc.
You can think of a ring manifest as similar to a network genesis file.
+
+***Example Manifest***
+```json
+{
+  "n": 3, // total number of initial nodes in the ring
+  "t": 2, // threshold number of nodes for a proxy-encryption operation
+  "dkg": "rabin", // DKG algorithm
+  "pss": "avpss", // Proactive Secret Sharing algorithm
+  "pre": "elgamal", // Proxy Re-Encryption algorithm
+  "bulletin": "sourcehub", // bulletin board implementation
+  "transport": "p2p", // networking transport
+  "authentication": "jws-did", // encryption request authentication
+  "authorization": "ACP", // encryption request authorization
+  "nodes": [ // PeerIDs and Addresses of the initial nodes
+    {"id":"16Uiu2HAm35sSr96x1TJHBTkWdcDH9P8twhTw92iDyq38XvyGzgZN","address":"/ip4/127.0.0.1/tcp/9001"},
+    {"id":"16Uiu2HAmAVcM6V1PY8DdvzobyK5QZbwX5z3AA6wCSrCm6xUA79Xn","address":"/ip4/127.0.0.1/tcp/9002"},
+    {"id":"16Uiu2HAkzjLLosHcV4LGvLY4vskda5NgMW4qmtfQ2uMbgFAoqghX","address":"/ip4/127.0.0.1/tcp/9003"}
+  ],
+  "nonce": 0 // nonce to alter the deterministic ring content identifier (CID)
+}
+```
+
+### Creating the Root Ring Manifest
+To create the Root Ring manifest we need all the initial nodes' PeerIDs and P2P Addresses. In fact, the rest of the parameters of the Root Ring manifest will be the same as the example above, with the exception of:
+* `n`: the total number of nodes
+* `t`: the threshold number of nodes
+* `nodes`: the initial node set
+
+The Root Ring manifest *cannot* be created until all existing validators start their Orbis nodes and report back with their respective PeerIDs and Addresses.
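Since a malformed manifest will stall the whole DKG ceremony, it can be worth sanity-checking the assembled file before distributing it. A small illustrative Python check (not an official `orbisd` tool; field names follow the example manifest above) might look like:

```python
# Illustrative sanity check for a ring manifest. Not part of orbisd --
# field names are taken from the example manifest above.
import json

def check_manifest(manifest):
    """Return a list of problems found (empty list means it looks sane)."""
    problems = []
    n, t, nodes = manifest.get("n"), manifest.get("t"), manifest.get("nodes", [])
    if not isinstance(n, int) or n < 3:
        problems.append("n must be an integer >= 3")
    if not isinstance(t, int) or not isinstance(n, int) or not (1 < t <= n):
        problems.append("t must satisfy 1 < t <= n")
    if len(nodes) != n:
        problems.append(f"nodes list has {len(nodes)} entries, expected n={n}")
    for node in nodes:
        # Every node needs a PeerID and a multiaddr-style address.
        if not node.get("id") or not node.get("address", "").startswith("/"):
            problems.append(f"node entry malformed: {node}")
    return problems

manifest = json.loads("""
{"n": 3, "t": 2, "nodes": [
  {"id": "16Uiu2HAm35s...", "address": "/ip4/127.0.0.1/tcp/9001"},
  {"id": "16Uiu2HAmAVc...", "address": "/ip4/127.0.0.1/tcp/9002"},
  {"id": "16Uiu2HAkzjL...", "address": "/ip4/127.0.0.1/tcp/9003"}
]}
""")
assert check_manifest(manifest) == []
```

Running such a check locally before posting the manifest can save a round of coordination if a node entry was pasted incorrectly.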
+
+Once you have a running node, you can get your PeerID and Address with the following command:
+```bash
+orbisd -s transport-service get-host --transport p2p
+```
+This will respond with something like:
+```json
+{
+  "node":{
+    "id":"16Uiu2HAm35sSr96x1TJHBTkWdcDH9P8twhTw92iDyq38XvyGzgZN",
+    "address":"/ip4/<your-ip>/tcp/9001",
+    "publicKey":{
+      "Type":"Secp256k1",
+      "Data":"AnHK2Co3LgeLHo8tMsyfMlp0JXGL3y9yiPOAkQVknfrp"
+    }
+  }
+}
+```
+
+:::info
+Sometimes the `address` field returns `127.0.0.1` instead of a public address. If so, please replace it with your node's publicly accessible IP address.
+:::
+
+Please post this *full* response in the [#validator-general](https://discord.com/channels/427944769851752448/1200236096089509918) channel. This is all public information, so no security details are being leaked.
+
+Once all the current validators have started their nodes and posted their host info, we can craft the Root Ring manifest.
+
+### Create Root Ring
+:::info
+We now have the assembled `manifest.json` file needed to start the root ring. You can download the root ring manifest for Testnet 1 [here](https://github.com/sourcenetwork/networks/blob/15ad45d92e6c012150e764db0c9e7f9e1e25c64f/testnet1/orbis-root-ring/manifest.json)
+:::
+**ALL NODES MUST BE UPDATED TO v0.2.3 BEFORE CREATING THE RING**
+
+To create the root ring and start the Distributed Key Generation (DKG) process that creates the shared keypair, each node will need to download the manifest file [here](https://github.com/sourcenetwork/networks/blob/15ad45d92e6c012150e764db0c9e7f9e1e25c64f/testnet1/orbis-root-ring/manifest.json) and then run the `create-ring` command, as shown below.
+
+**NOTE** You must ensure you have sufficient $OPEN balance for the SourceHub key you are using for your orbis node.
You can get tokens from the faucet: https://faucet.source.network/
+
+```bash
+wget https://raw.githubusercontent.com/sourcenetwork/networks/15ad45d92e6c012150e764db0c9e7f9e1e25c64f/testnet1/orbis-root-ring/manifest.json
+
+orbisd -s ring-service create-ring -f ./manifest.json
+```
+
+This will initialize the DKG process and your node will start generating its respective DKG Share. This process can't complete unless *all* nodes are online and syncing. Once completed, the Root Ring will have an initialized DKG, and is ready to start receiving `Store` and `ReEncrypt` requests!
+
+If we have reached this step together without errors, then we will have completed the Orbis Root Ring setup!
+
+However, if there are errors experienced along the way, we may need to issue a software update and restart the process. This is experimental and bleeding-edge software, but we are hopeful that we can collectively launch the Root Ring without any breaking or blocking issues.
\ No newline at end of file
diff --git a/docs/orbis/overview.md b/docs/orbis/overview.md
new file mode 100644
index 0000000..9b067ee
--- /dev/null
+++ b/docs/orbis/overview.md
@@ -0,0 +1,21 @@
+---
+sidebar_position: 1
+title: Overview
+slug: /orbis
+---
+# Orbis Overview
+![Orbis Overview](/img/orbis/cover.png)
+
+Orbis is a decentralized Secrets Management engine powered by Threshold [Proxy Re-Encryption](/orbis/concepts/pre) and Multi-Party Computation, enabling a trustless system to manage and share application and user secrets. Application and user secrets can be anything from encryption keys and API tokens to general small-sized messages. This is comparable to other Secrets Management systems like [HashiCorp Vault](https://www.vaultproject.io/), but without a single centralized entity.
+
+## How it works
+Deployments of Orbis are called *Secret Rings*, which are initialized by a group of nodes that collectively agree on some starting *Manifest*.
The manifest defines the initial parameters, such as which authentication/authorization schemes, Proxy Re-Encryption and Distributed Key Generation (DKG) algorithms, bulletin protocol, etc. are used.
+
+Once the manifest is created, Secret Ring nodes will start their initial DKG ceremony, which will generate a shared keypair maintained by the nodes, where each node has a share of the private key, and the shares in aggregate represent a single public key. When a threshold number of nodes (defined by the chosen parameters in the Manifest) collaborate, they are able to execute various kinds of cryptographic operations.
+
+Once a Ring is fully configured and initialized, users that want to store secrets will encrypt their secret *`S`* to the ring's aggregate public key *`Pk`*, *`Se = Enc(S, Pk)`*. They then send *`Se`* to any of the Orbis nodes, along with the *Authorization Context* that determines how the configured authorization system processes requests (which is dependent on the `authz` parameter of the Manifest), as a `StoreSecret` request. Finally, the receiving node will broadcast the request to the Ring's bulletin protocol to sync with the rest of the nodes.
+
+:::info
+The *Root Ring* is a special Orbis Ring deployment that is a public permissionless service provided by the same infrastructure nodes as the SourceHub publicly hosted service. This ensures there is a single public Source stack deployment that developers can use alongside their DefraDB nodes.
+:::
+
diff --git a/docs/orbis/reference/_category_.json b/docs/orbis/reference/_category_.json
new file mode 100644
index 0000000..86d0ad2
--- /dev/null
+++ b/docs/orbis/reference/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Reference",
+  "position": 4
+}
\ No newline at end of file
diff --git a/docs/orbis/reference/config.md b/docs/orbis/reference/config.md
new file mode 100644
index 0000000..b53e15d
--- /dev/null
+++ b/docs/orbis/reference/config.md
@@ -0,0 +1,3 @@
+# Config
+
+> TODO
\ No newline at end of file
diff --git a/docs/orbis/reference/manifest.md b/docs/orbis/reference/manifest.md
new file mode 100644
index 0000000..eb47d64
--- /dev/null
+++ b/docs/orbis/reference/manifest.md
@@ -0,0 +1,24 @@
+# Manifest
+
+```json
+{
+  "manifest": {
+    "n": 3, // N: The total number of nodes in the ring
+    "t": 2, // T: The threshold number of nodes required to run a decryption request (T <= N)
+    "dkg": "rabin", // DKG: Distributed Key Generation algorithm (available: rabin)
+    "pss": "avpss", // PSS: Proactive Secret Sharing algorithm (available: avpss)
+    "pre": "elgamal", // PRE: Proxy Re-Encryption algorithm (available: elgamal)
+    "bulletin": "p2p", // Bulletin: The network broadcast bulletin (available: p2p, sourcehub)
+    "transport": "p2p", // Transport: The overlay network transport (available: p2p)
+    "authentication": "jws-did", // Authentication: The authn scheme for clients (available: jws-did)
+    "authorization": "zanzi", // Authorization: The authz scheme for clients (available: zanzi, sourcehub)
+    "nodes": [ // Nodes: Set of node objects identifying the initial nodes.
+      {
+        "id":"16Uiu2HAmLyq5HXHSFxvxmGcsnwsYPZvTDtfe3CYnWDK8jDAHhJC5",
+        "address":"/ip4/127.0.0.1/tcp/9000"
+      }
+    ],
+    "nonce": 0 // Nonce: Random value to adjust the deterministic RingID creation
+  }
+}
+```
\ No newline at end of file
diff --git a/docs/references/cli/defradb.md b/docs/references/cli/defradb.md
deleted file mode 100644
index 3a38cb5..0000000
--- a/docs/references/cli/defradb.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# defradb
-
-DefraDB Edge Database
-
-## Synopsis
-
-DefraDB is the edge database to power the user-centric future.
-
-Start a database node, issue a request to a local or remote node, and much more.
-
-DefraDB is released under the BSL license, (c) 2022 Democratized Data Foundation.
-See https://docs.source.network/BSL.txt for more information.
-
-
-## Options
-
-```
-  -h, --help                help for defradb
-      --logformat string    Log format to use. Options are csv, json (default "csv")
-      --logger stringArray  Override logger parameters. Usage: --logger ,level=,output=,...
-      --loglevel string     Log level to use.
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client -* [defradb init](defradb_init.md) - Initialize DefraDB's root directory and configuration file -* [defradb server-dump](defradb_server-dump.md) - Dumps the state of the entire database -* [defradb start](defradb_start.md) - Start a DefraDB node -* [defradb version](defradb_version.md) - Display the version information of DefraDB and its components - diff --git a/docs/references/cli/defradb_client.md b/docs/references/cli/defradb_client.md deleted file mode 100644 index 81656ac..0000000 --- a/docs/references/cli/defradb_client.md +++ /dev/null @@ -1,39 +0,0 @@ -# client - -Interact with a running DefraDB node as a client - -## Synopsis - -Interact with a running DefraDB node as a client. -Execute queries, add schema types, and run debug routines. - -## Options - -``` - -h, --help help for client -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb](defradb.md) - DefraDB Edge Database -* [defradb client blocks](defradb_client_blocks.md) - Interact with the database's blockstore -* [defradb client dump](defradb_client_dump.md) - Dump the contents of a database node-side -* [defradb client peerid](defradb_client_peerid.md) - Get the peer ID of the DefraDB node -* [defradb client ping](defradb_client_ping.md) - Ping to test connection to a node -* [defradb client query](defradb_client_query.md) - Send a DefraDB GraphQL query request -* [defradb client rpc](defradb_client_rpc.md) - Interact with a DefraDB gRPC server -* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a running DefraDB instance - diff --git a/docs/references/cli/defradb_client_blocks.md b/docs/references/cli/defradb_client_blocks.md deleted file mode 100644 index 9f1a50f..0000000 --- a/docs/references/cli/defradb_client_blocks.md +++ /dev/null @@ -1,28 +0,0 @@ -# client blocks - -Interact with the database's blockstore - -## Options - -``` - -h, --help help for blocks -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client -* [defradb client blocks get](defradb_client_blocks_get.md) - Get a block by its CID from the blockstore. - diff --git a/docs/references/cli/defradb_client_blocks_get.md b/docs/references/cli/defradb_client_blocks_get.md deleted file mode 100644 index 2ddfcb8..0000000 --- a/docs/references/cli/defradb_client_blocks_get.md +++ /dev/null @@ -1,31 +0,0 @@ -# client blocks get - -Get a block by its CID from the blockstore. - -``` -defradb client blocks get [CID] [flags] -``` - -## Options - -``` - -h, --help help for get -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client blocks](defradb_client_blocks.md) - Interact with the database's blockstore - diff --git a/docs/references/cli/defradb_client_dump.md b/docs/references/cli/defradb_client_dump.md deleted file mode 100644 index fdc2a38..0000000 --- a/docs/references/cli/defradb_client_dump.md +++ /dev/null @@ -1,31 +0,0 @@ -# client dump - -Dump the contents of a database node-side - -``` -defradb client dump [flags] -``` - -## Options - -``` - -h, --help help for dump -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client - diff --git a/docs/references/cli/defradb_client_peerid.md b/docs/references/cli/defradb_client_peerid.md deleted file mode 100644 index cf3f175..0000000 --- a/docs/references/cli/defradb_client_peerid.md +++ /dev/null @@ -1,31 +0,0 @@ -# client peerid - -Get the peer ID of the DefraDB node - -``` -defradb client peerid [flags] -``` - -## Options - -``` - -h, --help help for peerid -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client - diff --git a/docs/references/cli/defradb_client_ping.md b/docs/references/cli/defradb_client_ping.md deleted file mode 100644 index 6115c5f..0000000 --- a/docs/references/cli/defradb_client_ping.md +++ /dev/null @@ -1,31 +0,0 @@ -# client ping - -Ping to test connection to a node - -``` -defradb client ping [flags] -``` - -## Options - -``` - -h, --help help for ping -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client - diff --git a/docs/references/cli/defradb_client_query.md b/docs/references/cli/defradb_client_query.md deleted file mode 100644 index b602633..0000000 --- a/docs/references/cli/defradb_client_query.md +++ /dev/null @@ -1,46 +0,0 @@ -# client query - -Send a DefraDB GraphQL query request - -## Synopsis - -Send a DefraDB GraphQL query request to the database. 
- -A query request can be sent as a single argument. Example command: -defradb client query 'query { ... }' - -Or it can be sent via stdin by using the '-' special syntax. Example command: -cat request.graphql | defradb client query - - -A GraphQL client such as GraphiQL (https://github.com/graphql/graphiql) can be used to interact -with the database more conveniently. - -To learn more about the DefraDB GraphQL Query Language, refer to https://docs.source.network. - -``` -defradb client query [query request] [flags] -``` - -## Options - -``` - -h, --help help for query -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client - diff --git a/docs/references/cli/defradb_client_rpc.md b/docs/references/cli/defradb_client_rpc.md deleted file mode 100644 index 1044e78..0000000 --- a/docs/references/cli/defradb_client_rpc.md +++ /dev/null @@ -1,34 +0,0 @@ -# client rpc - -Interact with a DefraDB gRPC server - -## Synopsis - -Interact with a DefraDB gRPC server. - -## Options - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - -h, --help help for rpc -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. 
Usage: --logger ,level=,output=,... - --loglevel string Log level to use. Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client -* [defradb client rpc p2pcollection](defradb_client_rpc_p2pcollection.md) - Interact with the P2P collection system -* [defradb client rpc replicator](defradb_client_rpc_replicator.md) - Interact with the replicator system - diff --git a/docs/references/cli/defradb_client_rpc_addreplicator.md b/docs/references/cli/defradb_client_rpc_addreplicator.md deleted file mode 100644 index a7c5c9f..0000000 --- a/docs/references/cli/defradb_client_rpc_addreplicator.md +++ /dev/null @@ -1,37 +0,0 @@ -# client rpc addreplicator - -Add a new replicator - -## Synopsis - -Use this command if you wish to add a new target replicator -for the p2p data sync system. - -``` -defradb client rpc addreplicator [flags] -``` - -## Options - -``` - -h, --help help for addreplicator -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc](defradb_client_rpc.md) - Interact with a DefraDB gRPC server - diff --git a/docs/references/cli/defradb_client_rpc_p2pcollection.md b/docs/references/cli/defradb_client_rpc_p2pcollection.md deleted file mode 100644 index 37edd5e..0000000 --- a/docs/references/cli/defradb_client_rpc_p2pcollection.md +++ /dev/null @@ -1,35 +0,0 @@ -# client rpc p2pcollection - -Interact with the P2P collection system - -## Synopsis - -Add, delete, or get the list of P2P collections - -## Options - -``` - -h, --help help for p2pcollection -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc](defradb_client_rpc.md) - Interact with a DefraDB gRPC server -* [defradb client rpc p2pcollection add](defradb_client_rpc_p2pcollection_add.md) - Add P2P collections -* [defradb client rpc p2pcollection getall](defradb_client_rpc_p2pcollection_getall.md) - Get all P2P collections -* [defradb client rpc p2pcollection remove](defradb_client_rpc_p2pcollection_remove.md) - Remove P2P collections - diff --git a/docs/references/cli/defradb_client_rpc_p2pcollection_add.md b/docs/references/cli/defradb_client_rpc_p2pcollection_add.md deleted file mode 100644 index 902ff41..0000000 --- a/docs/references/cli/defradb_client_rpc_p2pcollection_add.md +++ /dev/null @@ -1,36 +0,0 @@ -# client rpc p2pcollection add - -Add P2P collections - -## Synopsis - -Use this command if you wish to add new P2P collections to the pubsub topics - -``` -defradb client rpc p2pcollection add [collectionID] [flags] -``` - -## Options - -``` - -h, --help help for add -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use.
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc p2pcollection](defradb_client_rpc_p2pcollection.md) - Interact with the P2P collection system - diff --git a/docs/references/cli/defradb_client_rpc_p2pcollection_getall.md b/docs/references/cli/defradb_client_rpc_p2pcollection_getall.md deleted file mode 100644 index 92d5337..0000000 --- a/docs/references/cli/defradb_client_rpc_p2pcollection_getall.md +++ /dev/null @@ -1,36 +0,0 @@ -# client rpc p2pcollection getall - -Get all P2P collections - -## Synopsis - -Use this command if you wish to get all P2P collections in the pubsub topics - -``` -defradb client rpc p2pcollection getall [flags] -``` - -## Options - -``` - -h, --help help for getall -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc p2pcollection](defradb_client_rpc_p2pcollection.md) - Interact with the P2P collection system - diff --git a/docs/references/cli/defradb_client_rpc_p2pcollection_remove.md b/docs/references/cli/defradb_client_rpc_p2pcollection_remove.md deleted file mode 100644 index 9f8214d..0000000 --- a/docs/references/cli/defradb_client_rpc_p2pcollection_remove.md +++ /dev/null @@ -1,36 +0,0 @@ -# client rpc p2pcollection remove - -Remove P2P collections - -## Synopsis - -Use this command if you wish to remove P2P collections from the pubsub topics - -``` -defradb client rpc p2pcollection remove [collectionID] [flags] -``` - -## Options - -``` - -h, --help help for remove -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use.
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc p2pcollection](defradb_client_rpc_p2pcollection.md) - Interact with the P2P collection system - diff --git a/docs/references/cli/defradb_client_rpc_replicator.md b/docs/references/cli/defradb_client_rpc_replicator.md deleted file mode 100644 index cdb87fa..0000000 --- a/docs/references/cli/defradb_client_rpc_replicator.md +++ /dev/null @@ -1,37 +0,0 @@ -# client rpc replicator - -Interact with the replicator system - -## Synopsis - -Add, delete, or get the list of persisted replicators - -## Options - -``` - -c, --collection stringArray Define the collection for the replicator - -f, --full Set the replicator to act on all collections - -h, --help help for replicator -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc](defradb_client_rpc.md) - Interact with a DefraDB gRPC server -* [defradb client rpc replicator delete](defradb_client_rpc_replicator_delete.md) - Delete a replicator -* [defradb client rpc replicator getall](defradb_client_rpc_replicator_getall.md) - Get all replicators -* [defradb client rpc replicator set](defradb_client_rpc_replicator_set.md) - Set a P2P replicator - diff --git a/docs/references/cli/defradb_client_rpc_replicator_delete.md b/docs/references/cli/defradb_client_rpc_replicator_delete.md deleted file mode 100644 index 392481b..0000000 --- a/docs/references/cli/defradb_client_rpc_replicator_delete.md +++ /dev/null @@ -1,37 +0,0 @@ -# client rpc replicator delete - -Delete a replicator - -## Synopsis - -Use this command if you wish to remove the target replicator -for the p2p data sync system. - -``` -defradb client rpc replicator delete [-f, --full | -c, --collection] [flags] -``` - -## Options - -``` - -h, --help help for delete -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc replicator](defradb_client_rpc_replicator.md) - Interact with the replicator system - diff --git a/docs/references/cli/defradb_client_rpc_replicator_getall.md b/docs/references/cli/defradb_client_rpc_replicator_getall.md deleted file mode 100644 index 79d891a..0000000 --- a/docs/references/cli/defradb_client_rpc_replicator_getall.md +++ /dev/null @@ -1,37 +0,0 @@ -# client rpc replicator getall - -Get all replicators - -## Synopsis - -Use this command if you wish to get all the replicators -for the p2p data sync system. - -``` -defradb client rpc replicator getall [flags] -``` - -## Options - -``` - -h, --help help for getall -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc replicator](defradb_client_rpc_replicator.md) - Interact with the replicator system - diff --git a/docs/references/cli/defradb_client_rpc_replicator_set.md b/docs/references/cli/defradb_client_rpc_replicator_set.md deleted file mode 100644 index 5b94f1a..0000000 --- a/docs/references/cli/defradb_client_rpc_replicator_set.md +++ /dev/null @@ -1,39 +0,0 @@ -# client rpc replicator set - -Set a P2P replicator - -## Synopsis - -Use this command if you wish to add a new target replicator -for the p2p data sync system or add schemas to an existing one - -``` -defradb client rpc replicator set [-f, --full | -c, --collection] [flags] -``` - -## Options - -``` - -c, --collection stringArray Define the collection for the replicator - -f, --full Set the replicator to act on all collections - -h, --help help for set -``` - -## Options inherited from parent commands - -``` - --addr string gRPC endpoint address (default "0.0.0.0:9161") - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client rpc replicator](defradb_client_rpc_replicator.md) - Interact with the replicator system - diff --git a/docs/references/cli/defradb_client_schema.md b/docs/references/cli/defradb_client_schema.md deleted file mode 100644 index 140c7fe..0000000 --- a/docs/references/cli/defradb_client_schema.md +++ /dev/null @@ -1,33 +0,0 @@ -# client schema - -Interact with the schema system of a running DefraDB instance - -## Synopsis - -Make changes or updates to schema types on a DefraDB node, or look up existing ones. - -## Options - -``` - -h, --help help for schema -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use.
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client](defradb_client.md) - Interact with a running DefraDB node as a client -* [defradb client schema add](defradb_client_schema_add.md) - Add a new schema type to DefraDB -* [defradb client schema patch](defradb_client_schema_patch.md) - Patch an existing schema type - diff --git a/docs/references/cli/defradb_client_schema_add.md b/docs/references/cli/defradb_client_schema_add.md deleted file mode 100644 index 0909eb5..0000000 --- a/docs/references/cli/defradb_client_schema_add.md +++ /dev/null @@ -1,47 +0,0 @@ -# client schema add - -Add a new schema type to DefraDB - -## Synopsis - -Add a new schema type to DefraDB. - -Example: add from an argument string: - defradb client schema add 'type Foo { ... }' - -Example: add from file: - defradb client schema add -f schema.graphql - -Example: add from stdin: - cat schema.graphql | defradb client schema add - - -To learn more about the DefraDB GraphQL Schema Language, refer to https://docs.source.network. - -``` -defradb client schema add [schema] [flags] -``` - -## Options - -``` - -f, --file string File to load a schema from - -h, --help help for add -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a running DefraDB instance - diff --git a/docs/references/cli/defradb_client_schema_patch.md b/docs/references/cli/defradb_client_schema_patch.md deleted file mode 100644 index 307e7b8..0000000 --- a/docs/references/cli/defradb_client_schema_patch.md +++ /dev/null @@ -1,49 +0,0 @@ -# client schema patch - -Patch an existing schema type - -## Synopsis - -Patch an existing schema. - -Uses JSON PATCH formatting as a DDL. - -Example: patch from an argument string: - defradb client schema patch '[{ "op": "add", "path": "...", "value": {...} }]' - -Example: patch from file: - defradb client schema patch -f patch.json - -Example: patch from stdin: - cat patch.json | defradb client schema patch - - -To learn more about the DefraDB GraphQL Schema Language, refer to https://docs.source.network. - -``` -defradb client schema patch [schema] [flags] -``` - -## Options - -``` - -f, --file string File to load a patch from - -h, --help help for patch -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb client schema](defradb_client_schema.md) - Interact with the schema system of a running DefraDB instance - diff --git a/docs/references/cli/defradb_init.md b/docs/references/cli/defradb_init.md deleted file mode 100644 index 5b7f207..0000000 --- a/docs/references/cli/defradb_init.md +++ /dev/null @@ -1,36 +0,0 @@ -# init - -Initialize DefraDB's root directory and configuration file - -## Synopsis - -Initialize a directory for configuration and data at the given path. - -``` -defradb init [flags] -``` - -## Options - -``` - -h, --help help for init - --reinitialize Reinitialize the configuration file - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb](defradb.md) - DefraDB Edge Database - diff --git a/docs/references/cli/defradb_server-dump.md b/docs/references/cli/defradb_server-dump.md deleted file mode 100644 index 91641d1..0000000 --- a/docs/references/cli/defradb_server-dump.md +++ /dev/null @@ -1,32 +0,0 @@ -# server-dump - -Dumps the state of the entire database - -``` -defradb server-dump [flags] -``` - -## Options - -``` - -h, --help help for server-dump - --store string Datastore to use. Options are badger, memory (default "badger") -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb](defradb.md) - DefraDB Edge Database - diff --git a/docs/references/cli/defradb_start.md b/docs/references/cli/defradb_start.md deleted file mode 100644 index d393f7f..0000000 --- a/docs/references/cli/defradb_start.md +++ /dev/null @@ -1,46 +0,0 @@ -# start - -Start a DefraDB node - -## Synopsis - -Start a new instance of a DefraDB node.
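To illustrate the `defradb start` flags documented in this reference, here is a hypothetical local-development invocation (the root directory path is illustrative; the flags and their meanings are taken from the option listings in this reference):

```shell
# Initialize a root directory, then start an ephemeral node:
# in-memory datastore, P2P disabled, API on the default address.
defradb init --rootdir ~/.defradb-dev
defradb start \
  --rootdir ~/.defradb-dev \
  --store memory \
  --no-p2p \
  --url localhost:9181
```

From a second terminal, `defradb client ping` against the same `--url` can be used to verify the node is reachable.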
- -``` -defradb start [flags] -``` - -## Options - -``` - --email string Email address used by the CA for notifications (default "example@example.com") - -h, --help help for start - --max-txn-retries int Specify the maximum number of retries per transaction (default 5) - --no-p2p Disable the peer-to-peer network synchronization system - --p2paddr string Listener address for the p2p network (formatted as a libp2p MultiAddr) (default "/ip4/0.0.0.0/tcp/9171") - --peers string List of peers to connect to - --privkeypath string Path to the private key for tls (default "certs/server.key") - --pubkeypath string Path to the public key for tls (default "certs/server.crt") - --store string Specify the datastore to use (supported: badger, memory) (default "badger") - --tcpaddr string Listener address for the tcp gRPC server (formatted as a libp2p MultiAddr) (default "/ip4/0.0.0.0/tcp/9161") - --tls Enable serving the API over https - --valuelogfilesize ByteSize Specify the datastore value log file size (in bytes). In-memory size will be 2*valuelogfilesize (default 1GiB) -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use.
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb](defradb.md) - DefraDB Edge Database - diff --git a/docs/references/cli/defradb_version.md b/docs/references/cli/defradb_version.md deleted file mode 100644 index de817ad..0000000 --- a/docs/references/cli/defradb_version.md +++ /dev/null @@ -1,33 +0,0 @@ -# version - -Display the version information of DefraDB and its components - -``` -defradb version [flags] -``` - -## Options - -``` - -f, --format string Version output format. Options are text, json - --full Display the full version information - -h, --help help for version -``` - -## Options inherited from parent commands - -``` - --logformat string Log format to use. Options are csv, json (default "csv") - --logger stringArray Override logger parameters. Usage: --logger ,level=,output=,... - --loglevel string Log level to use. 
Options are debug, info, error, fatal (default "info") - --lognocolor Disable colored log output - --logoutput string Log output path (default "stderr") - --logtrace Include stacktrace in error and fatal logs - --rootdir string Directory for data and configuration to use (default "$HOME/.defradb") - --url string URL of HTTP endpoint to listen on or connect to (default "localhost:9181") -``` - -## SEE ALSO - -* [defradb](defradb.md) - DefraDB Edge Database - diff --git a/docs/release-notes.md b/docs/release-notes.md deleted file mode 100644 index ead4fc1..0000000 --- a/docs/release-notes.md +++ /dev/null @@ -1,633 +0,0 @@ ---- -sidebar_position: 5 -title: Release Notes ---- - -# Release Notes -## [v0.5.0](https://github.com/sourcenetwork/defradb/compare/v0.4.0...v0.5.0) - -> 2023-04-12 - -DefraDB v0.5 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. - -There are many new features in this release, but most importantly, this is the first open source release for DefraDB. As such, this release focused on various quality-of-life changes and refactors, bug fixes, and overall cleanliness of the repo so it can effectively be used and tested in the public domain. - -To get a full outline of the changes, we invite you to review the official changelog below. Some highlights are the first iteration of our schema update system, allowing developers to add new fields to schemas using our JSON Patch based DDL, a new DAG based delete system which will persist "soft-delete" ops into the CRDT Merkle DAG, and an early prototype for our collection level peer-to-peer synchronization. - -This release does include a Breaking Change to existing v0.4.x databases.
If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. - -### Features - -* Add document delete mechanics ([#1263](https://github.com/sourcenetwork/defradb/issues/1263)) -* Ability to explain an executed request ([#1188](https://github.com/sourcenetwork/defradb/issues/1188)) -* Add SchemaPatch CLI command ([#1250](https://github.com/sourcenetwork/defradb/issues/1250)) -* Add support for one-one mutation from sec. side ([#1247](https://github.com/sourcenetwork/defradb/issues/1247)) -* Store only key in DAG instead of dockey path ([#1245](https://github.com/sourcenetwork/defradb/issues/1245)) -* Add collectionId field to commit field ([#1235](https://github.com/sourcenetwork/defradb/issues/1235)) -* Add field kind substitution for PatchSchema ([#1223](https://github.com/sourcenetwork/defradb/issues/1223)) -* Add dockey field for commit field ([#1216](https://github.com/sourcenetwork/defradb/issues/1216)) -* Allow new fields to be added locally to schema ([#1139](https://github.com/sourcenetwork/defradb/issues/1139)) -* Add `like` sub-string filter ([#1091](https://github.com/sourcenetwork/defradb/issues/1091)) -* Add ability for P2P to wait for pushlog by peer ([#1098](https://github.com/sourcenetwork/defradb/issues/1098)) -* Add P2P collection topic subscription ([#1086](https://github.com/sourcenetwork/defradb/issues/1086)) -* Add support for schema version id in queries ([#1067](https://github.com/sourcenetwork/defradb/issues/1067)) -* Add schema version id to commit queries ([#1061](https://github.com/sourcenetwork/defradb/issues/1061)) -* Persist schema version at time of commit ([#1055](https://github.com/sourcenetwork/defradb/issues/1055)) -* Add ability to input simple explain type arg ([#1039](https://github.com/sourcenetwork/defradb/issues/1039)) - -### Fixes - -* API address parameter validation 
([#1311](https://github.com/sourcenetwork/defradb/issues/1311)) -* Improve error message for NonNull GQL types ([#1333](https://github.com/sourcenetwork/defradb/issues/1333)) -* Handle panics in the rpc server ([#1330](https://github.com/sourcenetwork/defradb/issues/1330)) -* Handle returned error in select.go ([#1329](https://github.com/sourcenetwork/defradb/issues/1329)) -* Resolve handful of CLI issues ([#1318](https://github.com/sourcenetwork/defradb/issues/1318)) -* Only check for events queue on subscription request ([#1326](https://github.com/sourcenetwork/defradb/issues/1326)) -* Remove client Create/UpdateCollection ([#1309](https://github.com/sourcenetwork/defradb/issues/1309)) -* CLI to display specific command usage help ([#1314](https://github.com/sourcenetwork/defradb/issues/1314)) -* Fix P2P collection CLI commands ([#1295](https://github.com/sourcenetwork/defradb/issues/1295)) -* Don't double up badger file path ([#1299](https://github.com/sourcenetwork/defradb/issues/1299)) -* Update immutable package ([#1290](https://github.com/sourcenetwork/defradb/issues/1290)) -* Fix panic on success of Add/RemoveP2PCollections ([#1297](https://github.com/sourcenetwork/defradb/issues/1297)) -* Fix deadlock on memory-datastore Close ([#1273](https://github.com/sourcenetwork/defradb/issues/1273)) -* Determine if query is introspection query ([#1255](https://github.com/sourcenetwork/defradb/issues/1255)) -* Allow newly added fields to sync via p2p ([#1226](https://github.com/sourcenetwork/defradb/issues/1226)) -* Expose `ExplainEnum` in the GQL schema ([#1204](https://github.com/sourcenetwork/defradb/issues/1204)) -* Resolve aggregates' mapping with deep nested subtypes ([#1175](https://github.com/sourcenetwork/defradb/issues/1175)) -* Make sort stable and handle nil comparison ([#1094](https://github.com/sourcenetwork/defradb/issues/1094)) -* Change successful schema add status to 200 ([#1106](https://github.com/sourcenetwork/defradb/issues/1106)) -* Add delay in 
P2P test util execution ([#1093](https://github.com/sourcenetwork/defradb/issues/1093)) -* Ensure errors test don't hard expect folder name ([#1072](https://github.com/sourcenetwork/defradb/issues/1072)) -* Remove potential P2P deadlock ([#1056](https://github.com/sourcenetwork/defradb/issues/1056)) -* Rework the P2P integration tests ([#989](https://github.com/sourcenetwork/defradb/issues/989)) -* Improve DAG sync with highly concurrent updates ([#1031](https://github.com/sourcenetwork/defradb/issues/1031)) - -### Documentation - -* Update docs for the v0.5 release ([#1320](https://github.com/sourcenetwork/defradb/issues/1320)) -* Document client interfaces in client/db.go ([#1305](https://github.com/sourcenetwork/defradb/issues/1305)) -* Document client Description types ([#1307](https://github.com/sourcenetwork/defradb/issues/1307)) -* Improve security policy ([#1240](https://github.com/sourcenetwork/defradb/issues/1240)) -* Add security disclosure policy ([#1194](https://github.com/sourcenetwork/defradb/issues/1194)) -* Correct commits query example in readme ([#1172](https://github.com/sourcenetwork/defradb/issues/1172)) - -### Refactoring - -* Improve p2p collection operations on peer ([#1286](https://github.com/sourcenetwork/defradb/issues/1286)) -* Migrate gql introspection tests to new framework ([#1211](https://github.com/sourcenetwork/defradb/issues/1211)) -* Reorganise client transaction related interfaces ([#1180](https://github.com/sourcenetwork/defradb/issues/1180)) -* Config-local viper, rootdir, and logger parsing ([#1132](https://github.com/sourcenetwork/defradb/issues/1132)) -* Migrate mutation-relation tests to new framework ([#1109](https://github.com/sourcenetwork/defradb/issues/1109)) -* Rework integration test framework ([#1089](https://github.com/sourcenetwork/defradb/issues/1089)) -* Generate gql types using col. 
desc ([#1080](https://github.com/sourcenetwork/defradb/issues/1080)) -* Extract config errors to dedicated file ([#1107](https://github.com/sourcenetwork/defradb/issues/1107)) -* Change terminology from query to request ([#1054](https://github.com/sourcenetwork/defradb/issues/1054)) -* Allow db keys to handle multiple schema versions ([#1026](https://github.com/sourcenetwork/defradb/issues/1026)) -* Extract query schema errors to dedicated file ([#1037](https://github.com/sourcenetwork/defradb/issues/1037)) -* Extract planner errors to dedicated file ([#1034](https://github.com/sourcenetwork/defradb/issues/1034)) -* Extract query parser errors to dedicated file ([#1035](https://github.com/sourcenetwork/defradb/issues/1035)) - -### Testing - -* Remove test reference to DEFRA_ROOTDIR env var ([#1328](https://github.com/sourcenetwork/defradb/issues/1328)) -* Expand tests for Peer subscribe actions ([#1287](https://github.com/sourcenetwork/defradb/issues/1287)) -* Fix flaky TestCloseThroughContext test ([#1265](https://github.com/sourcenetwork/defradb/issues/1265)) -* Add gql introspection tests for patch schema ([#1219](https://github.com/sourcenetwork/defradb/issues/1219)) -* Explicitly state change detector split for test ([#1228](https://github.com/sourcenetwork/defradb/issues/1228)) -* Add test for successful one-one create mutation ([#1215](https://github.com/sourcenetwork/defradb/issues/1215)) -* Ensure that all databases are always closed on exit ([#1187](https://github.com/sourcenetwork/defradb/issues/1187)) -* Add P2P tests for Schema Update adding field ([#1182](https://github.com/sourcenetwork/defradb/issues/1182)) -* Migrate P2P/state tests to new framework ([#1160](https://github.com/sourcenetwork/defradb/issues/1160)) -* Remove sleep from subscription tests ([#1156](https://github.com/sourcenetwork/defradb/issues/1156)) -* Fetch documents on test execution start ([#1163](https://github.com/sourcenetwork/defradb/issues/1163)) -* Introduce basic testing 
for the `version` module ([#1111](https://github.com/sourcenetwork/defradb/issues/1111)) -* Boost test coverage for collection_update ([#1050](https://github.com/sourcenetwork/defradb/issues/1050)) -* Wait between P2P update retry attempts ([#1052](https://github.com/sourcenetwork/defradb/issues/1052)) -* Exclude auto-generated protobuf files from codecov ([#1048](https://github.com/sourcenetwork/defradb/issues/1048)) -* Add P2P tests for relational docs ([#1042](https://github.com/sourcenetwork/defradb/issues/1042)) - -### Continuous integration - -* Add workflow that builds DefraDB AMI upon tag push ([#1304](https://github.com/sourcenetwork/defradb/issues/1304)) -* Allow PR title to end with a capital letter ([#1291](https://github.com/sourcenetwork/defradb/issues/1291)) -* Changes for `dependabot` to be well-behaved ([#1165](https://github.com/sourcenetwork/defradb/issues/1165)) -* Skip benchmarks for dependabot ([#1144](https://github.com/sourcenetwork/defradb/issues/1144)) -* Add workflow to ensure deps build properly ([#1078](https://github.com/sourcenetwork/defradb/issues/1078)) -* Runner and Builder Containerfiles ([#951](https://github.com/sourcenetwork/defradb/issues/951)) -* Fix go-header linter rule to be any year ([#1021](https://github.com/sourcenetwork/defradb/issues/1021)) - -### Chore - -* Add Islam as contributor ([#1302](https://github.com/sourcenetwork/defradb/issues/1302)) -* Update go-libp2p to 0.26.4 ([#1257](https://github.com/sourcenetwork/defradb/issues/1257)) -* Improve the test coverage of datastore ([#1203](https://github.com/sourcenetwork/defradb/issues/1203)) -* Add issue and discussion templates ([#1193](https://github.com/sourcenetwork/defradb/issues/1193)) -* Bump libp2p/go-libp2p-kad-dht from 0.21.0 to 0.21.1 ([#1146](https://github.com/sourcenetwork/defradb/issues/1146)) -* Enable dependabot ([#1120](https://github.com/sourcenetwork/defradb/issues/1120)) -* Update `opentelemetry` dependencies 
([#1114](https://github.com/sourcenetwork/defradb/issues/1114)) -* Update dependencies including go-ipfs ([#1112](https://github.com/sourcenetwork/defradb/issues/1112)) -* Bump to GoLang v1.19 ([#818](https://github.com/sourcenetwork/defradb/issues/818)) -* Remove versionedScan node ([#1049](https://github.com/sourcenetwork/defradb/issues/1049)) - -### Bot - -* Bump github.com/multiformats/go-multiaddr from 0.8.0 to 0.9.0 ([#1277](https://github.com/sourcenetwork/defradb/issues/1277)) -* Bump google.golang.org/grpc from 1.53.0 to 1.54.0 ([#1233](https://github.com/sourcenetwork/defradb/issues/1233)) -* Bump github.com/multiformats/go-multibase from 0.1.1 to 0.2.0 ([#1230](https://github.com/sourcenetwork/defradb/issues/1230)) -* Bump github.com/ipfs/go-libipfs from 0.6.2 to 0.7.0 ([#1231](https://github.com/sourcenetwork/defradb/issues/1231)) -* Bump github.com/ipfs/go-cid from 0.3.2 to 0.4.0 ([#1200](https://github.com/sourcenetwork/defradb/issues/1200)) -* Bump github.com/ipfs/go-ipfs-blockstore from 1.2.0 to 1.3.0 ([#1199](https://github.com/sourcenetwork/defradb/issues/1199)) -* Bump github.com/stretchr/testify from 1.8.1 to 1.8.2 ([#1198](https://github.com/sourcenetwork/defradb/issues/1198)) -* Bump github.com/ipfs/go-libipfs from 0.6.1 to 0.6.2 ([#1201](https://github.com/sourcenetwork/defradb/issues/1201)) -* Bump golang.org/x/crypto from 0.6.0 to 0.7.0 ([#1197](https://github.com/sourcenetwork/defradb/issues/1197)) -* Bump libp2p/go-libp2p-gostream from 0.5.0 to 0.6.0 ([#1152](https://github.com/sourcenetwork/defradb/issues/1152)) -* Bump github.com/ipfs/go-libipfs from 0.5.0 to 0.6.1 ([#1166](https://github.com/sourcenetwork/defradb/issues/1166)) -* Bump github.com/ugorji/go/codec from 1.2.9 to 1.2.11 ([#1173](https://github.com/sourcenetwork/defradb/issues/1173)) -* Bump github.com/libp2p/go-libp2p-pubsub from 0.9.0 to 0.9.3 ([#1183](https://github.com/sourcenetwork/defradb/issues/1183)) - - -## 
[v0.4.0](https://github.com/sourcenetwork/defradb/compare/v0.3.1...v0.4.0) - -> 2022-12-23 - -DefraDB v0.4 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. - -There are various new features in this release - some of which are breaking - and we invite you to review the official changelog below. Some highlights are persistence of replicators, DateTime scalars, TLS support, and GQL subscriptions. - -This release does include a Breaking Change to existing v0.3.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. - -### Features - -* Add basic metric functionality ([#971](https://github.com/sourcenetwork/defradb/issues/971)) -* Add thread safe transactional in-memory datastore ([#947](https://github.com/sourcenetwork/defradb/issues/947)) -* Persist p2p replicators ([#960](https://github.com/sourcenetwork/defradb/issues/960)) -* Add DateTime custom scalars ([#931](https://github.com/sourcenetwork/defradb/issues/931)) -* Add GraphQL subscriptions ([#934](https://github.com/sourcenetwork/defradb/issues/934)) -* Add support for tls ([#885](https://github.com/sourcenetwork/defradb/issues/885)) -* Add group by support for commits ([#887](https://github.com/sourcenetwork/defradb/issues/887)) -* Add depth support for commits ([#889](https://github.com/sourcenetwork/defradb/issues/889)) -* Make dockey optional for allCommits queries ([#847](https://github.com/sourcenetwork/defradb/issues/847)) -* Add WithStack to the errors package ([#870](https://github.com/sourcenetwork/defradb/issues/870)) -* Add event system ([#834](https://github.com/sourcenetwork/defradb/issues/834)) - -### Fixes - -* Correct errors.WithStack behaviour 
([#984](https://github.com/sourcenetwork/defradb/issues/984)) -* Correctly handle nested one to one joins ([#964](https://github.com/sourcenetwork/defradb/issues/964)) -* Do not assume parent record exists when joining ([#963](https://github.com/sourcenetwork/defradb/issues/963)) -* Change time format for HTTP API log ([#910](https://github.com/sourcenetwork/defradb/issues/910)) -* Error if group select contains non-group-by fields ([#898](https://github.com/sourcenetwork/defradb/issues/898)) -* Add inspection of values for ENV flags ([#900](https://github.com/sourcenetwork/defradb/issues/900)) -* Remove panics from document ([#881](https://github.com/sourcenetwork/defradb/issues/881)) -* Add __typename support ([#871](https://github.com/sourcenetwork/defradb/issues/871)) -* Handle subscriber close ([#877](https://github.com/sourcenetwork/defradb/issues/877)) -* Publish update events post commit ([#866](https://github.com/sourcenetwork/defradb/issues/866)) - -### Refactoring - -* Make rootstore require Batching and TxnDatastore ([#940](https://github.com/sourcenetwork/defradb/issues/940)) -* Conceptually clarify schema vs query-language ([#924](https://github.com/sourcenetwork/defradb/issues/924)) -* Decouple db.db from gql ([#912](https://github.com/sourcenetwork/defradb/issues/912)) -* Merkle clock heads cleanup ([#918](https://github.com/sourcenetwork/defradb/issues/918)) -* Simplify dag fetcher ([#913](https://github.com/sourcenetwork/defradb/issues/913)) -* Cleanup parsing logic ([#909](https://github.com/sourcenetwork/defradb/issues/909)) -* Move planner outside the gql directory ([#907](https://github.com/sourcenetwork/defradb/issues/907)) -* Refactor commit nodes ([#892](https://github.com/sourcenetwork/defradb/issues/892)) -* Make latest commits syntax sugar ([#890](https://github.com/sourcenetwork/defradb/issues/890)) -* Remove commit query ([#841](https://github.com/sourcenetwork/defradb/issues/841)) - -### Testing - -* Add event tests 
([#965](https://github.com/sourcenetwork/defradb/issues/965)) -* Add new setup for testing explain functionality ([#949](https://github.com/sourcenetwork/defradb/issues/949)) -* Add txn relation-type delete and create tests ([#875](https://github.com/sourcenetwork/defradb/issues/875)) -* Skip change detection for tests that assert panic ([#883](https://github.com/sourcenetwork/defradb/issues/883)) - -### Continuous integration - -* Bump all gh-action versions to support node16 ([#990](https://github.com/sourcenetwork/defradb/issues/990)) -* Bump ssh-agent action to v0.7.0 ([#978](https://github.com/sourcenetwork/defradb/issues/978)) -* Add error message format check ([#901](https://github.com/sourcenetwork/defradb/issues/901)) - -### Chore - -* Extract (events, merkle) errors to errors.go ([#973](https://github.com/sourcenetwork/defradb/issues/973)) -* Extract (datastore, db) errors to errors.go ([#969](https://github.com/sourcenetwork/defradb/issues/969)) -* Extract (connor, crdt, core) errors to errors.go ([#968](https://github.com/sourcenetwork/defradb/issues/968)) -* Extract inline (http and client) errors to errors.go ([#967](https://github.com/sourcenetwork/defradb/issues/967)) -* Update badger version ([#966](https://github.com/sourcenetwork/defradb/issues/966)) -* Move Option and Enumerable to immutables ([#939](https://github.com/sourcenetwork/defradb/issues/939)) -* Add configuration of external loggers ([#942](https://github.com/sourcenetwork/defradb/issues/942)) -* Strip DSKey prefixes and simplify NewDataStoreKey ([#944](https://github.com/sourcenetwork/defradb/issues/944)) -* Include version metadata in cross-building ([#930](https://github.com/sourcenetwork/defradb/issues/930)) -* Update to v0.23.2 the libP2P package ([#908](https://github.com/sourcenetwork/defradb/issues/908)) -* Remove `ipfslite` dependency ([#739](https://github.com/sourcenetwork/defradb/issues/739)) - - - -## 
[v0.3.1](https://github.com/sourcenetwork/defradb/compare/v0.3.0...v0.3.1) - -> 2022-09-23 - -DefraDB v0.3.1 is a minor release, primarily focusing on additional/extended features and fixes of items added in the `v0.3.0` release. - -### Features - -* Add cid support for allCommits ([#857](https://github.com/sourcenetwork/defradb/issues/857)) -* Add offset support to allCommits ([#859](https://github.com/sourcenetwork/defradb/issues/859)) -* Add limit support to allCommits query ([#856](https://github.com/sourcenetwork/defradb/issues/856)) -* Add order support to allCommits ([#845](https://github.com/sourcenetwork/defradb/issues/845)) -* Display CLI usage on user error ([#819](https://github.com/sourcenetwork/defradb/issues/819)) -* Add support for dockey filters in child joins ([#806](https://github.com/sourcenetwork/defradb/issues/806)) -* Add sort support for numeric aggregates ([#786](https://github.com/sourcenetwork/defradb/issues/786)) -* Allow filtering by nil ([#789](https://github.com/sourcenetwork/defradb/issues/789)) -* Add aggregate offset support ([#778](https://github.com/sourcenetwork/defradb/issues/778)) -* Remove filter depth limit ([#777](https://github.com/sourcenetwork/defradb/issues/777)) -* Add support for and-or inline array aggregate filters ([#779](https://github.com/sourcenetwork/defradb/issues/779)) -* Add limit support for aggregates ([#771](https://github.com/sourcenetwork/defradb/issues/771)) -* Add support for inline arrays of nillable types ([#759](https://github.com/sourcenetwork/defradb/issues/759)) -* Create errors package ([#548](https://github.com/sourcenetwork/defradb/issues/548)) -* Add ability to display peer id ([#719](https://github.com/sourcenetwork/defradb/issues/719)) -* Add a config option to set the vlog max file size ([#743](https://github.com/sourcenetwork/defradb/issues/743)) -* Explain `topLevelNode` like a `MultiNode` plan ([#749](https://github.com/sourcenetwork/defradb/issues/749)) -* Make `topLevelNode` 
explainable ([#737](https://github.com/sourcenetwork/defradb/issues/737)) - -### Fixes - -* Order subtype without selecting the join child ([#810](https://github.com/sourcenetwork/defradb/issues/810)) -* Correctly handle nil one-one joins ([#837](https://github.com/sourcenetwork/defradb/issues/837)) -* Reset scan node for each join ([#828](https://github.com/sourcenetwork/defradb/issues/828)) -* Handle filter input field argument being nil ([#787](https://github.com/sourcenetwork/defradb/issues/787)) -* Ensure CLI outputs JSON to stdout when directed to pipe ([#804](https://github.com/sourcenetwork/defradb/issues/804)) -* Error if given the wrong side of a one-one relationship ([#795](https://github.com/sourcenetwork/defradb/issues/795)) -* Add object marker to enable return of empty docs ([#800](https://github.com/sourcenetwork/defradb/issues/800)) -* Resolve the extra `typeIndexJoin`s for `_avg` aggregate ([#774](https://github.com/sourcenetwork/defradb/issues/774)) -* Remove _like filter operator ([#797](https://github.com/sourcenetwork/defradb/issues/797)) -* Remove having gql types ([#785](https://github.com/sourcenetwork/defradb/issues/785)) -* Error if child _group selected without parent groupBy ([#781](https://github.com/sourcenetwork/defradb/issues/781)) -* Error nicely on missing field specifier ([#782](https://github.com/sourcenetwork/defradb/issues/782)) -* Handle order input field argument being nil ([#701](https://github.com/sourcenetwork/defradb/issues/701)) -* Change output to outputpath in config file template for logger ([#716](https://github.com/sourcenetwork/defradb/issues/716)) -* Delete mutations not correctly persisting all keys ([#731](https://github.com/sourcenetwork/defradb/issues/731)) - -### Tooling - -* Ban the usage of `ioutil` package ([#747](https://github.com/sourcenetwork/defradb/issues/747)) -* Migrate from CircleCi to GitHub Actions ([#679](https://github.com/sourcenetwork/defradb/issues/679)) - -### Documentation - -* Clarify 
meaning of url param, update in-repo CLI docs ([#814](https://github.com/sourcenetwork/defradb/issues/814)) -* Disclaimer of exposed to network and not encrypted ([#793](https://github.com/sourcenetwork/defradb/issues/793)) -* Update logo to respect theme ([#728](https://github.com/sourcenetwork/defradb/issues/728)) - -### Refactoring - -* Replace all `interface{}` with `any` alias ([#805](https://github.com/sourcenetwork/defradb/issues/805)) -* Use fastjson to parse mutation data string ([#772](https://github.com/sourcenetwork/defradb/issues/772)) -* Rework limit node flow ([#767](https://github.com/sourcenetwork/defradb/issues/767)) -* Make Option immutable ([#769](https://github.com/sourcenetwork/defradb/issues/769)) -* Rework sum and count nodes to make use of generics ([#757](https://github.com/sourcenetwork/defradb/issues/757)) -* Remove some possible panics from codebase ([#732](https://github.com/sourcenetwork/defradb/issues/732)) -* Change logging calls to use feedback in CLI package ([#714](https://github.com/sourcenetwork/defradb/issues/714)) - -### Testing - -* Add tests for aggs with nil filters ([#813](https://github.com/sourcenetwork/defradb/issues/813)) -* Add not equals filter tests ([#798](https://github.com/sourcenetwork/defradb/issues/798)) -* Fix `cli/peerid_test` to not clash addresses ([#766](https://github.com/sourcenetwork/defradb/issues/766)) -* Add change detector summary to test readme ([#754](https://github.com/sourcenetwork/defradb/issues/754)) -* Add tests for inline array grouping ([#752](https://github.com/sourcenetwork/defradb/issues/752)) - -### Continuous integration - -* Reduce test resource usage and test with file db ([#791](https://github.com/sourcenetwork/defradb/issues/791)) -* Add makefile target to verify the local module cache ([#775](https://github.com/sourcenetwork/defradb/issues/775)) -* Allow PR titles to end with a number ([#745](https://github.com/sourcenetwork/defradb/issues/745)) -* Add a workflow to validate 
pull request titles ([#734](https://github.com/sourcenetwork/defradb/issues/734)) -* Fix the linter version to `v1.47` ([#726](https://github.com/sourcenetwork/defradb/issues/726)) - -### Chore - -* Remove file system paths from resulting executable ([#831](https://github.com/sourcenetwork/defradb/issues/831)) -* Add goimports linter for consistent imports ordering ([#816](https://github.com/sourcenetwork/defradb/issues/816)) -* Improve UX by providing more information ([#802](https://github.com/sourcenetwork/defradb/issues/802)) -* Change to defra errors and handle errors stacktrace ([#794](https://github.com/sourcenetwork/defradb/issues/794)) -* Clean up `go.mod` with pruned module graphs ([#756](https://github.com/sourcenetwork/defradb/issues/756)) -* Update to v0.20.3 of libp2p ([#740](https://github.com/sourcenetwork/defradb/issues/740)) -* Bump to GoLang `v1.18` ([#721](https://github.com/sourcenetwork/defradb/issues/721)) - - -## [v0.3.0](https://github.com/sourcenetwork/defradb/compare/v0.2.1...v0.3.0) - -> 2022-08-02 - -DefraDB v0.3 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. - -There are *several* new features in this release, and we invite you to review the official changelog below. Some highlights are various new features for Grouping & Aggregation in the query system, like top-level aggregation and group filtering. Moreover, a brand new Query Explain system was added to introspect the execution plans created by DefraDB. Lastly, we introduced a revamped CLI configuration system. - -This release does include a Breaking Change to existing v0.2.x databases. If you need help migrating an existing deployment, reach out at [hello@source.network](mailto:hello@source.network) or join our Discord at https://discord.source.network/. 
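The Query Explain system and top-level aggregates described above are easiest to see as requests. A minimal sketch, assuming a hypothetical `Book` collection with a `rating` field (the collection and field names are illustrative, not taken from the release notes):

```graphql
# Ask DefraDB to return the execution plan it builds for a request,
# rather than the matching documents.
query @explain {
  Book {
    name
  }
}

# Top-level aggregate: count Book documents, with a filter applied
# to the aggregated set.
query {
  _count(Book: {filter: {rating: {_gt: 4}}})
}
```

The `@explain` form returns a plan tree built from nodes like `scanNode` and `countNode` (the nodes made explainable in the entries below); consult the DefraDB query documentation for the exact response shape in your version.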
- -### Features - -* Add named config overrides ([#659](https://github.com/sourcenetwork/defradb/issues/659)) -* Expose color and caller log options, add validation ([#652](https://github.com/sourcenetwork/defradb/issues/652)) -* Add ability to explain `groupNode` and its attribute(s). ([#641](https://github.com/sourcenetwork/defradb/issues/641)) -* Add primary directive for schema definitions (`@primary`) ([#650](https://github.com/sourcenetwork/defradb/issues/650)) -* Add support for aggregate filters on inline arrays ([#622](https://github.com/sourcenetwork/defradb/issues/622)) -* Add explainable renderLimitNode & hardLimitNode attributes. ([#614](https://github.com/sourcenetwork/defradb/issues/614)) -* Add support for top level aggregates ([#594](https://github.com/sourcenetwork/defradb/issues/594)) -* Update `countNode` explanation to be consistent. ([#600](https://github.com/sourcenetwork/defradb/issues/600)) -* Add support for stdin as input in CLI ([#608](https://github.com/sourcenetwork/defradb/issues/608)) -* Explain `cid` & `field` attributes for `dagScanNode` ([#598](https://github.com/sourcenetwork/defradb/issues/598)) -* Add ability to explain `dagScanNode` attribute(s). ([#560](https://github.com/sourcenetwork/defradb/issues/560)) -* Add the ability to send user feedback to the console even when logging to file. ([#568](https://github.com/sourcenetwork/defradb/issues/568)) -* Add ability to explain `sortNode` attribute(s). ([#558](https://github.com/sourcenetwork/defradb/issues/558)) -* Add ability to explain `sumNode` attribute(s). ([#559](https://github.com/sourcenetwork/defradb/issues/559)) -* Introduce top-level config package ([#389](https://github.com/sourcenetwork/defradb/issues/389)) -* Add ability to explain `updateNode` attributes. ([#514](https://github.com/sourcenetwork/defradb/issues/514)) -* Add `typeIndexJoin` explainable attributes. 
([#499](https://github.com/sourcenetwork/defradb/issues/499)) -* Add support to explain `countNode` attributes. ([#504](https://github.com/sourcenetwork/defradb/issues/504)) -* Add CORS capability to HTTP API ([#467](https://github.com/sourcenetwork/defradb/issues/467)) -* Add explanation of spans for `scanNode`. ([#492](https://github.com/sourcenetwork/defradb/issues/492)) -* Add ability to Explain the response plan. ([#385](https://github.com/sourcenetwork/defradb/issues/385)) -* Add aggregate filter support for groups only ([#426](https://github.com/sourcenetwork/defradb/issues/426)) -* Configurable caller option in logger ([#416](https://github.com/sourcenetwork/defradb/issues/416)) -* Add Average aggregate support ([#383](https://github.com/sourcenetwork/defradb/issues/383)) -* Allow summation of aggregates ([#341](https://github.com/sourcenetwork/defradb/issues/341)) -* Add ability to check DefraDB CLI version. ([#339](https://github.com/sourcenetwork/defradb/issues/339)) - -### Fixes - -* Add a check to ensure limit is not 0 when evaluating query limit and offset ([#706](https://github.com/sourcenetwork/defradb/issues/706)) -* Support multiple `--logger` flags ([#704](https://github.com/sourcenetwork/defradb/issues/704)) -* Return without an error if relation is finalized ([#698](https://github.com/sourcenetwork/defradb/issues/698)) -* Logger not correctly applying named config ([#696](https://github.com/sourcenetwork/defradb/issues/696)) -* Add content-type media type parsing ([#678](https://github.com/sourcenetwork/defradb/issues/678)) -* Remove portSyncLock deadlock condition ([#671](https://github.com/sourcenetwork/defradb/issues/671)) -* Silence cobra default errors and usage printing ([#668](https://github.com/sourcenetwork/defradb/issues/668)) -* Add stdout validation when setting logging output path ([#666](https://github.com/sourcenetwork/defradb/issues/666)) -* Consider `--logoutput` CLI flag properly 
([#645](https://github.com/sourcenetwork/defradb/issues/645)) -* Handle errors and responses in CLI `client` commands ([#579](https://github.com/sourcenetwork/defradb/issues/579)) -* Rename aggregate gql types ([#638](https://github.com/sourcenetwork/defradb/issues/638)) -* Error when attempting to insert value into relationship field ([#632](https://github.com/sourcenetwork/defradb/issues/632)) -* Allow adding of new schema to database ([#635](https://github.com/sourcenetwork/defradb/issues/635)) -* Correctly parse dockey in broadcast log event. ([#631](https://github.com/sourcenetwork/defradb/issues/631)) -* Increase system's open files limit in integration tests ([#627](https://github.com/sourcenetwork/defradb/issues/627)) -* Avoid populating `order.ordering` with empties. ([#618](https://github.com/sourcenetwork/defradb/issues/618)) -* Change to supporting of non-null inline arrays ([#609](https://github.com/sourcenetwork/defradb/issues/609)) -* Assert fields exist in collection before saving to them ([#604](https://github.com/sourcenetwork/defradb/issues/604)) -* CLI `init` command to reinitialize only config file ([#603](https://github.com/sourcenetwork/defradb/issues/603)) -* Add config and registry clearing to TestLogWritesMessagesToFeedbackLog ([#596](https://github.com/sourcenetwork/defradb/issues/596)) -* Change `$eq` to `_eq` in the failing test. ([#576](https://github.com/sourcenetwork/defradb/issues/576)) -* Resolve failing HTTP API tests via cleanup ([#557](https://github.com/sourcenetwork/defradb/issues/557)) -* Ensure Makefile compatibility with macOS ([#527](https://github.com/sourcenetwork/defradb/issues/527)) -* Separate out iotas in their own blocks. 
([#464](https://github.com/sourcenetwork/defradb/issues/464)) -* Use x/cases for titling instead of strings to handle deprecation ([#457](https://github.com/sourcenetwork/defradb/issues/457)) -* Handle limit and offset in sub groups ([#440](https://github.com/sourcenetwork/defradb/issues/440)) -* Issue preventing DB from restarting with no records ([#437](https://github.com/sourcenetwork/defradb/issues/437)) -* log serving HTTP API before goroutine blocks ([#358](https://github.com/sourcenetwork/defradb/issues/358)) - -### Testing - -* Add integration testing for P2P. ([#655](https://github.com/sourcenetwork/defradb/issues/655)) -* Fix formatting of tests with no extra brackets ([#643](https://github.com/sourcenetwork/defradb/issues/643)) -* Add tests for `averageNode` explain. ([#639](https://github.com/sourcenetwork/defradb/issues/639)) -* Add schema integration tests ([#628](https://github.com/sourcenetwork/defradb/issues/628)) -* Add tests for default properties ([#611](https://github.com/sourcenetwork/defradb/issues/611)) -* Specify which collection to update in test framework ([#601](https://github.com/sourcenetwork/defradb/issues/601)) -* Add tests for grouping by undefined value ([#543](https://github.com/sourcenetwork/defradb/issues/543)) -* Add test for querying undefined field ([#544](https://github.com/sourcenetwork/defradb/issues/544)) -* Expand commit query tests ([#541](https://github.com/sourcenetwork/defradb/issues/541)) -* Add cid (time-travel) query tests ([#539](https://github.com/sourcenetwork/defradb/issues/539)) -* Restructure and expand filter tests ([#512](https://github.com/sourcenetwork/defradb/issues/512)) -* Basic unit testing of `node` package ([#503](https://github.com/sourcenetwork/defradb/issues/503)) -* Test filter in filter tests ([#473](https://github.com/sourcenetwork/defradb/issues/473)) -* Add test for deletion of records in a relationship ([#329](https://github.com/sourcenetwork/defradb/issues/329)) -* Benchmark transaction 
iteration ([#289](https://github.com/sourcenetwork/defradb/issues/289)) - -### Refactoring - -* Improve CLI error handling and fix small issues ([#649](https://github.com/sourcenetwork/defradb/issues/649)) -* Add top-level `version` package ([#583](https://github.com/sourcenetwork/defradb/issues/583)) -* Remove extra log levels ([#634](https://github.com/sourcenetwork/defradb/issues/634)) -* Change `sortNode` to `orderNode`. ([#591](https://github.com/sourcenetwork/defradb/issues/591)) -* Rework update and delete node to remove secondary planner ([#571](https://github.com/sourcenetwork/defradb/issues/571)) -* Trim imported connor package ([#530](https://github.com/sourcenetwork/defradb/issues/530)) -* Internal doc restructure ([#471](https://github.com/sourcenetwork/defradb/issues/471)) -* Copy-paste connor fork into repo ([#567](https://github.com/sourcenetwork/defradb/issues/567)) -* Add safety to the tests, add ability to catch stderr logs and add output path validation ([#552](https://github.com/sourcenetwork/defradb/issues/552)) -* Change handler functions implementation and response formatting ([#498](https://github.com/sourcenetwork/defradb/issues/498)) -* Improve the HTTP API implementation ([#382](https://github.com/sourcenetwork/defradb/issues/382)) -* Use new logger in net/api ([#420](https://github.com/sourcenetwork/defradb/issues/420)) -* Rename NewCidV1_SHA2_256 to mixedCaps ([#415](https://github.com/sourcenetwork/defradb/issues/415)) -* Remove utils package ([#397](https://github.com/sourcenetwork/defradb/issues/397)) -* Rework planNode Next and Value(s) function ([#374](https://github.com/sourcenetwork/defradb/issues/374)) -* Restructure aggregate query syntax ([#373](https://github.com/sourcenetwork/defradb/issues/373)) -* Remove dead code from client package and document remaining ([#356](https://github.com/sourcenetwork/defradb/issues/356)) -* Restructure datastore keys ([#316](https://github.com/sourcenetwork/defradb/issues/316)) -* Add commits 
lost during github outage ([#303](https://github.com/sourcenetwork/defradb/issues/303)) -* Move public members out of core and base packages ([#295](https://github.com/sourcenetwork/defradb/issues/295)) -* Make db stuff internal/private ([#291](https://github.com/sourcenetwork/defradb/issues/291)) -* Rework client.DB to ensure interface contains only public types ([#277](https://github.com/sourcenetwork/defradb/issues/277)) -* Remove GetPrimaryIndexDocKey from collection interface ([#279](https://github.com/sourcenetwork/defradb/issues/279)) -* Remove DataStoreKey from (public) dockey struct ([#278](https://github.com/sourcenetwork/defradb/issues/278)) -* Renormalize to ensure consistent file line termination. ([#226](https://github.com/sourcenetwork/defradb/issues/226)) -* Strongly typed key refactor ([#17](https://github.com/sourcenetwork/defradb/issues/17)) - -### Documentation - -* Use permanent link to BSL license document ([#692](https://github.com/sourcenetwork/defradb/issues/692)) -* README update v0.3.0 ([#646](https://github.com/sourcenetwork/defradb/issues/646)) -* Improve code documentation ([#533](https://github.com/sourcenetwork/defradb/issues/533)) -* Add CONTRIBUTING.md ([#531](https://github.com/sourcenetwork/defradb/issues/531)) -* Add package level docs for logging lib ([#338](https://github.com/sourcenetwork/defradb/issues/338)) - -### Tooling - -* Include all touched packages in code coverage ([#673](https://github.com/sourcenetwork/defradb/issues/673)) -* Use `gotestsum` over `go test` ([#619](https://github.com/sourcenetwork/defradb/issues/619)) -* Update Github pull request template ([#524](https://github.com/sourcenetwork/defradb/issues/524)) -* Fix the cross-build script ([#460](https://github.com/sourcenetwork/defradb/issues/460)) -* Add test coverage html output ([#466](https://github.com/sourcenetwork/defradb/issues/466)) -* Add linter rule for `goconst`. 
([#398](https://github.com/sourcenetwork/defradb/issues/398)) -* Add github PR template. ([#394](https://github.com/sourcenetwork/defradb/issues/394)) -* Disable auto-fixing linter issues by default ([#429](https://github.com/sourcenetwork/defradb/issues/429)) -* Fix linting of empty `else` code blocks ([#402](https://github.com/sourcenetwork/defradb/issues/402)) -* Add the `gofmt` linter rule. ([#405](https://github.com/sourcenetwork/defradb/issues/405)) -* Cleanup linter config file ([#400](https://github.com/sourcenetwork/defradb/issues/400)) -* Add linter rule for copyright headers ([#360](https://github.com/sourcenetwork/defradb/issues/360)) -* Organize our config files and tooling. ([#336](https://github.com/sourcenetwork/defradb/issues/336)) -* Limit line length to 100 characters (linter check) ([#224](https://github.com/sourcenetwork/defradb/issues/224)) -* Ignore db/tests folder and the bench marks. ([#280](https://github.com/sourcenetwork/defradb/issues/280)) - -### Continuous Integration - -* Fix circleci cache permission errors. ([#371](https://github.com/sourcenetwork/defradb/issues/371)) -* Ban extra elses ([#366](https://github.com/sourcenetwork/defradb/issues/366)) -* Fix change-detection to not fail when new tests are added. ([#333](https://github.com/sourcenetwork/defradb/issues/333)) -* Update golang-ci linter and explicit go-setup to use v1.17 ([#331](https://github.com/sourcenetwork/defradb/issues/331)) -* Comment the benchmarking result comparison to the PR ([#305](https://github.com/sourcenetwork/defradb/issues/305)) -* Add benchmark performance comparisons ([#232](https://github.com/sourcenetwork/defradb/issues/232)) -* Add caching / storing of bench report on default branch ([#290](https://github.com/sourcenetwork/defradb/issues/290)) -* Ensure full-benchmarks are ran on a PR-merge. ([#282](https://github.com/sourcenetwork/defradb/issues/282)) -* Add ability to control benchmarks by PR labels. 
([#267](https://github.com/sourcenetwork/defradb/issues/267)) - -### Chore - -* Update APL to refer to D2 Foundation ([#711](https://github.com/sourcenetwork/defradb/issues/711)) -* Update gitignore to include `cmd` folders ([#617](https://github.com/sourcenetwork/defradb/issues/617)) -* Enable random execution order of tests ([#554](https://github.com/sourcenetwork/defradb/issues/554)) -* Enable linters exportloopref, nolintlint, whitespace ([#535](https://github.com/sourcenetwork/defradb/issues/535)) -* Add utility for generation of man pages ([#493](https://github.com/sourcenetwork/defradb/issues/493)) -* Add Dockerfile ([#517](https://github.com/sourcenetwork/defradb/issues/517)) -* Enable errorlint linter ([#520](https://github.com/sourcenetwork/defradb/issues/520)) -* Binaries in`cmd` folder, examples in `examples` folder ([#501](https://github.com/sourcenetwork/defradb/issues/501)) -* Improve log outputs ([#506](https://github.com/sourcenetwork/defradb/issues/506)) -* Move testing to top-level `tests` folder ([#446](https://github.com/sourcenetwork/defradb/issues/446)) -* Update dependencies ([#450](https://github.com/sourcenetwork/defradb/issues/450)) -* Update go-ipfs-blockstore and ipfs-lite ([#436](https://github.com/sourcenetwork/defradb/issues/436)) -* Update libp2p dependency to v0.19 ([#424](https://github.com/sourcenetwork/defradb/issues/424)) -* Update ioutil package to io / os packages. 
([#376](https://github.com/sourcenetwork/defradb/issues/376)) -* git ignore vscode ([#343](https://github.com/sourcenetwork/defradb/issues/343)) -* Updated README.md contributors section ([#292](https://github.com/sourcenetwork/defradb/issues/292)) -* Update changelog v0.2.1 ([#252](https://github.com/sourcenetwork/defradb/issues/252)) - - -## [v0.2.1](https://github.com/sourcenetwork/defradb/compare/v0.2.0...v0.2.1) - -> 2022-03-04 - -### Features - -* Add ability to delete multiple documents using filter ([#206](https://github.com/sourcenetwork/defradb/issues/206)) -* Add ability to delete multiple documents, using multiple ids ([#196](https://github.com/sourcenetwork/defradb/issues/196)) - -### Fixes - -* Concurrency control of Document using RWMutex ([#213](https://github.com/sourcenetwork/defradb/issues/213)) -* Only log errors and above when benchmarking ([#261](https://github.com/sourcenetwork/defradb/issues/261)) -* Handle proper type conversion on sort nodes ([#228](https://github.com/sourcenetwork/defradb/issues/228)) -* Return empty array if no values found ([#223](https://github.com/sourcenetwork/defradb/issues/223)) -* Close fetcher on error ([#210](https://github.com/sourcenetwork/defradb/issues/210)) -* Installing binary using defradb name ([#190](https://github.com/sourcenetwork/defradb/issues/190)) - -### Tooling - -* Add short benchmark runner option ([#263](https://github.com/sourcenetwork/defradb/issues/263)) - -### Documentation - -* Add data format changes documentation folder ([#89](https://github.com/sourcenetwork/defradb/issues/89)) -* Correcting typos ([#143](https://github.com/sourcenetwork/defradb/issues/143)) -* Update generated CLI docs ([#208](https://github.com/sourcenetwork/defradb/issues/208)) -* Updated readme with P2P section ([#220](https://github.com/sourcenetwork/defradb/issues/220)) -* Update old or missing license headers ([#205](https://github.com/sourcenetwork/defradb/issues/205)) -* Update git-chglog config and template 
([#195](https://github.com/sourcenetwork/defradb/issues/195)) - -### Refactoring - -* Introduction of logging system ([#67](https://github.com/sourcenetwork/defradb/issues/67)) -* Restructure db/txn/multistore structures ([#199](https://github.com/sourcenetwork/defradb/issues/199)) -* Initialize database in constructor ([#211](https://github.com/sourcenetwork/defradb/issues/211)) -* Purge all println and ban it ([#253](https://github.com/sourcenetwork/defradb/issues/253)) - -### Testing - -* Detect and force breaking filesystem changes to be documented ([#89](https://github.com/sourcenetwork/defradb/issues/89)) -* Boost collection test coverage ([#183](https://github.com/sourcenetwork/defradb/issues/183)) - -### Continuous integration - -* Combine the Lint and Benchmark workflows so that the benchmark job depends on the lint job in one workflow ([#209](https://github.com/sourcenetwork/defradb/issues/209)) -* Add rule to only run benchmark if other check are successful ([#194](https://github.com/sourcenetwork/defradb/issues/194)) -* Increase linter timeout ([#230](https://github.com/sourcenetwork/defradb/issues/230)) - -### Chore - -* Remove commented out code ([#238](https://github.com/sourcenetwork/defradb/issues/238)) -* Remove dead code from multi node ([#186](https://github.com/sourcenetwork/defradb/issues/186)) - - -## [v0.2.0](https://github.com/sourcenetwork/defradb/compare/v0.1.0...v0.2.0) - -> 2022-02-07 - -DefraDB v0.2 is a major pre-production release. Until the stable version 1.0 is reached, the SemVer minor patch number will denote notable releases, which will give the project freedom to experiment and explore potentially breaking changes. - -This release is jam-packed with new features and a small number of breaking changes. Read the full changelog for a detailed description. 
Most notable features include a new Peer-to-Peer (P2P) data synchronization system, an expanded query system to support GroupBy & Aggregate operations, and lastly TimeTraveling queries allowing to query previous states of a document. - -Much more than just that has been added to ensure we're building reliable software expected of any database, such as expanded test & benchmark suites, automated bug detection, performance gains, and more. - -This release does include a Breaking Change to existing v0.1 databases regarding the internal data model, which affects the "Content Identifiers" we use to generate DocKeys and VersionIDs. If you need help migrating an existing deployment, reach out at hello@source.network or join our Discord at https://discord.source.network. - -### Features - -* Added Peer-to-Peer networking data synchronization ([#177](https://github.com/sourcenetwork/defradb/issues/177)) -* TimeTraveling (History Traversing) query engine and doc fetcher ([#59](https://github.com/sourcenetwork/defradb/issues/59)) -* Add Document Deletion with a Key ([#150](https://github.com/sourcenetwork/defradb/issues/150)) -* Add support for sum aggregate ([#121](https://github.com/sourcenetwork/defradb/issues/121)) -* Add support for lwwr scalar arrays (full replace on update) ([#115](https://github.com/sourcenetwork/defradb/issues/115)) -* Add count aggregate support ([#102](https://github.com/sourcenetwork/defradb/issues/102)) -* Add support for named relationships ([#108](https://github.com/sourcenetwork/defradb/issues/108)) -* Add multi doc key lookup support ([#76](https://github.com/sourcenetwork/defradb/issues/76)) -* Add basic group by functionality ([#43](https://github.com/sourcenetwork/defradb/issues/43)) -* Update datastore packages to allow use of context ([#48](https://github.com/sourcenetwork/defradb/issues/48)) - -### Bug fixes - -* Only add join if aggregating child object collection ([#188](https://github.com/sourcenetwork/defradb/issues/188)) -* Handle 
errors generated during input object thunks ([#123](https://github.com/sourcenetwork/defradb/issues/123)) -* Remove new types from in-memory cache on generate error ([#122](https://github.com/sourcenetwork/defradb/issues/122)) -* Support relationships where both fields have the same name ([#109](https://github.com/sourcenetwork/defradb/issues/109)) -* Handle errors generated in fields thunk ([#66](https://github.com/sourcenetwork/defradb/issues/66)) -* Ensure OperationDefinition case has at least one selection([#24](https://github.com/sourcenetwork/defradb/pull/24)) -* Close datastore iterator on scan close ([#56](https://github.com/sourcenetwork/defradb/pull/56)) (resulted in a panic when using limit) -* Close superseded iterators before orphaning ([#56](https://github.com/sourcenetwork/defradb/pull/56)) (fixes a panic in the join code) -* Move discard to after error check ([#88](https://github.com/sourcenetwork/defradb/pull/88)) (did result in panic if transaction creation fails) -* Check for nil iterator before closing document fetcher ([#108](https://github.com/sourcenetwork/defradb/pull/108)) - -### Tooling -* Added benchmark suite ([#160](https://github.com/sourcenetwork/defradb/issues/160)) - -### Documentation - -* Correcting comment typos ([#142](https://github.com/sourcenetwork/defradb/issues/142)) -* Correcting README typos ([#140](https://github.com/sourcenetwork/defradb/issues/140)) - -### Testing - -* Add transaction integration tests ([#175](https://github.com/sourcenetwork/defradb/issues/175)) -* Allow running of tests using badger-file as well as IM options ([#128](https://github.com/sourcenetwork/defradb/issues/128)) -* Add test datastore selection support ([#88](https://github.com/sourcenetwork/defradb/issues/88)) - -### Refactoring - -* Datatype modification protection ([#138](https://github.com/sourcenetwork/defradb/issues/138)) -* Cleanup Linter Complaints and Setup Makefile ([#63](https://github.com/sourcenetwork/defradb/issues/63)) -* Rework 
document rendering to avoid data duplication and mutation ([#68](https://github.com/sourcenetwork/defradb/issues/68)) -* Remove dependency on concrete datastore implementations from db package ([#51](https://github.com/sourcenetwork/defradb/issues/51)) -* Remove all `errors.Wrap` and update them with `fmt.Errorf`. ([#41](https://github.com/sourcenetwork/defradb/issues/41)) -* Restructure integration tests to provide better visibility ([#15](https://github.com/sourcenetwork/defradb/pull/15)) -* Remove schemaless code branches ([#23](https://github.com/sourcenetwork/defradb/pull/23)) - -### Performance -* Add badger multi scan support ([#85](https://github.com/sourcenetwork/defradb/pull/85)) -* Add support for range spans ([#86](https://github.com/sourcenetwork/defradb/pull/86)) - -### Continous integration - -* Use more accurate test coverage. ([#134](https://github.com/sourcenetwork/defradb/issues/134)) -* Disable Codecov's Patch Check -* Make codcov less strict for now to unblock development ([#125](https://github.com/sourcenetwork/defradb/issues/125)) -* Add codecov config file. ([#118](https://github.com/sourcenetwork/defradb/issues/118)) -* Add workflow that runs a job on AWS EC2 instance. ([#110](https://github.com/sourcenetwork/defradb/issues/110)) -* Add Code Test Coverage with CodeCov ([#116](https://github.com/sourcenetwork/defradb/issues/116)) -* Integrate GitHub Action for golangci-lint Annotations ([#106](https://github.com/sourcenetwork/defradb/issues/106)) -* Add Linter Check to CircleCi ([#92](https://github.com/sourcenetwork/defradb/issues/92)) - -### Chore - -* Remove the S1038 rule of the gosimple linter. 
([#129](https://github.com/sourcenetwork/defradb/issues/129)) -* Update to badger v3, and use badger as default in memory store ([#56](https://github.com/sourcenetwork/defradb/issues/56)) -* Make Cid versions consistent ([#57](https://github.com/sourcenetwork/defradb/issues/57)) \ No newline at end of file diff --git a/docs/sourcehub/concepts/_category_.json b/docs/sourcehub/concepts/_category_.json new file mode 100644 index 0000000..4d7732d --- /dev/null +++ b/docs/sourcehub/concepts/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Concepts", + "position": 4 + } \ No newline at end of file diff --git a/docs/sourcehub/concepts/zanzibar.md b/docs/sourcehub/concepts/zanzibar.md new file mode 100644 index 0000000..7b3ab0d --- /dev/null +++ b/docs/sourcehub/concepts/zanzibar.md @@ -0,0 +1,248 @@ +--- +date: 2023-09-08 +title: Zanzibar Access Control +--- + +## Introduction + +[Zanzibar](https://research.google/pubs/pub48190/) is an authorization service introduced by Google to manage access control across its services. +Its primary function is to evaluate access requests by answering: + +> Can user **U** perform operation **O** on object **A**? + +This article explores Zanzibar's access control model, how it handles access requests, and a conceptual framework for understanding it. + +## Relation Based Access Control Model + +Access control determines who or what can operate on a given resource within a system. Zanzibar implements a model closely aligned with [Relation-Based Access Control (RelBAC)](https://ieeexplore.ieee.org/abstract/document/4725889). RelBAC defines permissions based on relationships between entities. Similar to relational databases and object-oriented modeling, it establishes access rules using entity relationships. + +E.g. A book is written by an author. The book and author are connected by the authored relation. This relation implies permissions—an author should have read and edit access to their book. 
+ +Such relationships exist across various domains, often represented using specialized languages like description logic. RelBAC leverages these relationships to determine permissions dynamically. + +## Zanzibar's RelBAC + +A key concept in Zanzibar is the **Relation Tuple**, which represents relationships between system objects. + +### Relation Tuple + +``` +tuple := (object, relation, user) +object := namespace:id +user := object | (object, relation) +relation := string +namespace := string +id := string +``` + +A Relation Tuple is a **3-tuple** containing: + +1. **Object** – The entity being referenced. +2. **Relation** – A named association between entities. +3. **User** – The entity granted access, which can be: + - A direct reference to an object. + - A **userset** (an indirect group reference). + +E.g. The tuple `(article:zanzibar, publisher, corporation:google)` defines a relationship where **Google (corporation:google) is the publisher of the Zanzibar article (article:zanzibar)**. + +### Usersets + +A **userset** is a special form of `user` defined as `(object, relation)`, grouping users who share the same relation to an object. + +#### Example + +Consider the userset `(group:engineering, member)` and the tuples: + +- `(group:engineering, member, user:bob)` +- `(group:engineering, member, user:alice)`. + +The userset `(group:engineering, member)` expands to include **Bob and Alice** as members. + +### Recursive Definitions + +Usersets introduce a layer of indirection in Relation Tuples, allowing tuples to reference other usersets. This enables **recursive definitions**, where a tuple can specify a userset that, in turn, references another userset, supporting complex access hierarchies. + +## The Relation Graph + +A key insight into Zanzibar is recognizing that a set of Tuples forms a **Relation Graph**. 
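To make the tuple grammar and userset expansion concrete, here is a minimal Python sketch (the in-memory data model is purely illustrative; it is not Zanzibar's storage format or API, and it has no cycle detection):

```python
# Relation tuples modeled per the grammar above: (object, relation, user),
# where user is either a plain object id or a userset (object, relation).
tuples = {
    ("group:engineering", "member", "user:bob"),
    ("group:engineering", "member", "user:alice"),
    ("file:readme", "reader", ("group:engineering", "member")),
}

def expand_userset(obj, relation):
    """Expand (object, relation) into the set of concrete users it contains."""
    users = set()
    for o, r, u in tuples:
        if (o, r) == (obj, relation):
            if isinstance(u, tuple):        # nested userset: recurse into it
                users |= expand_userset(*u)
            else:
                users.add(u)
    return users

# (file:readme, reader) inherits Bob and Alice through the nested userset
# (group:engineering, member), without naming either of them directly.
```

Recursive expansion like this is what makes usersets composable: the `reader` tuple above never references individual users.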
+ +### Structure of the Relation Graph + +A Relation Tuple can be rewritten as a **pair of pairs**: +- ((object, relation), (object, relation)) + +These **object-relation pairs** are called **Relation Nodes**. The first pair represents the Tuple’s **Object and Relation**, while the second represents a **userset**. If relations are allowed to be empty, this structure accommodates all cases from the original definition. + +Each **Relation Tuple** defines an **Edge** in the Relation Graph, and each **Node** corresponds to an object-relation pair. + +### Example + +Given the following tuples: + +- ("file:readme", "owner", "bob") +- ("file:readme", "reader", ("group:engineering", "member")) +- ("group:engineering", "member", "alice") + +The corresponding Relation Graph looks like this: + +![Relation Graph Example](/img/sourcehub/relgraph-simple.png) + +### Using the Relation Graph + +The Relation Graph provides a **system-wide view** of all objects and their relationships. It enables answering questions such as: + +> Does user **U** have relation **R** with object **O**? + +This question is resolved by starting at node `(O, R)`, traversing the graph, and checking for user **U**. + +If **R** represents a permission (e.g., `"read"`, `"write"`, `"delete"`), the Relation Graph serves as a structured way to enforce and evaluate access control. + +## Userset Rewrite Rules + +Relation Tuples provide a **generic and powerful** model for representing **relations and permissions** in a system. However, they are minimalistic, often leading to redundant tuples. + +Historically, **grouping relations** and **grouping objects** have been crucial in access control. Theoretically, these features are essential for Zanzibar to qualify as a **Relation-Based Access Control (RelBAC)** implementation. + +To address this, Zanzibar introduces **Userset Rewrite Rules**. + +### Overview of Userset Rewrite Rules + +Userset Rewrite Rules are not immediately intuitive. 
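The question above reduces to graph reachability. A breadth-first sketch over the example tuples (again a toy in-memory model, not Zanzibar's distributed evaluation):

```python
from collections import deque

# Example tuples from above; a userset user links one relation node to another.
tuples = [
    ("file:readme", "owner", "bob"),
    ("file:readme", "reader", ("group:engineering", "member")),
    ("group:engineering", "member", "alice"),
]

def check(obj, relation, user):
    """Does `user` have `relation` with `obj`? Reachability from node (obj, relation)."""
    frontier, seen = deque([(obj, relation)]), set()
    while frontier:
        node = frontier.popleft()
        if node in seen:
            continue
        seen.add(node)
        for o, r, u in tuples:
            if (o, r) == node:
                if u == user:
                    return True
                if isinstance(u, tuple):    # userset: keep traversing the graph
                    frontier.append(u)
    return False
```

Note that `check("file:readme", "reader", "bob")` is `False` here even though Bob owns the file: plain tuples alone cannot express that owners are also readers, which is exactly the gap the rewrite rules fill.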
+ +A **rule** functions as a **transformation** on a **Relation Node** `(object, relation)`, returning a set of **descendant Relation Nodes**. + +#### Example + +Let **A** be a **Relation Node** and **R** be a **Rewrite Rule**. Then: + +R(A) → {B, C, D} + +This means that applying **R** to **A** produces a **set of descendant Relation Nodes** `{B, C, D}`. + +Rules are associated with specific **relations** and **execute at runtime**, dynamically resolving permissions as Zanzibar traverses the Relation Graph. + +### Permissions hierarchy and computed usersets + +A permission hierarchy simplifies access control by linking stronger and weaker permissions. If a user has permission to perform a higher-level action, they automatically gain permission for related lower-level actions. For example, the ability to write typically includes the ability to read. This hierarchy reduces administrative effort and simplifies rule management in an Access Control System. + +Refer to the following Relation Graph for a visual representation. + +![Relation Graph Example](/img/sourcehub/relgraph-simple.png) + +In our system, Bob is both a *reader* and an *owner*. If ownership always implies read access, maintaining redundant permission tuples adds unnecessary overhead. Zanzibar addresses this with the **Computed Userset** rule, which dynamically links relations in the permission hierarchy. To enforce that *owner* always implies *reader*, we define a **Computed Userset("owner")** rule for the *reader* relation. This rule automatically extends permissions without requiring additional tuples. + +When checking if Bob has *reader* access to `file:readme`, Zanzibar follows these steps: + +1. Start at the node `("file:readme", "reader")` and check for associated rules. +1. Detect the **Computed Userset("owner")** rule, which creates a new relation node `("file:readme", "owner")` as a successor of `("file:readme", "reader")`. +1. Continue searching through `("file:readme", "owner")`. 
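These steps can be sketched by extending the plain reachability check with a per-relation table of Computed Userset rules (the `computed_usersets` encoding is hypothetical and not Zanzibar's configuration syntax):

```python
tuples = [
    ("file:readme", "owner", "bob"),
    ("file:readme", "reader", ("group:engineering", "member")),
    ("group:engineering", "member", "alice"),
]

# Computed Userset rules: a relation also inherits from these relations.
computed_usersets = {"reader": ["owner"]}

def check(obj, relation, user, seen=None):
    seen = seen or set()
    if (obj, relation) in seen:             # guard against cyclic definitions
        return False
    seen.add((obj, relation))
    for o, r, u in tuples:
        if (o, r) == (obj, relation):
            if u == user or (isinstance(u, tuple) and check(*u, user, seen)):
                return True
    # Computed Userset: (obj, relation) also succeeds if (obj, stronger) does.
    return any(check(obj, stronger, user, seen)
               for stronger in computed_usersets.get(relation, []))
```

With the rule in place, `check("file:readme", "reader", "bob")` now succeeds: evaluation falls through from the `("file:readme", "reader")` node to `("file:readme", "owner")`, just as in the steps above.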
+ +This approach ensures efficient permission management by dynamically resolving inherited access rights. + +![Computed Userset Evaluation](/img/sourcehub/cu-annotated.png) + +Using a Computed Userset, we added a rule to Zanzibar that automatically derives one relation from another. +This enables users to define a set of global rules for a relation instead of adding additional Relation Tuples for each object in the system. +This powerful mechanism greatly decreases the maintenance cost associated with Relation Tuples. + +### Object Hierarchy and Tuple to Userset + +Before exploring the **Tuple to Userset** rule, let's revisit the core idea of **RelBAC**: relationships between system objects define permissions and determine access control. We have already seen how **Relation Tuples** link objects to users and how **Usersets** establish hierarchies and group users. However, we have yet to define how Zanzibar can create **relationships between objects** and group them accordingly. This is where the **Tuple to Userset** rule comes into play. + +To illustrate this concept, let's consider a familiar **filesystem permission model**. + +A filesystem consists of: + +- **Directories**, **files**, and **users** +- Users who **own** files and directories +- Directories that **contain** files + +In this example, having **read** permission on a directory should automatically grant **read** permission for all files within that directory. The **Tuple to Userset** rule enables this by defining relationships between objects dynamically. + +The following **Relation Graph** illustrates the existing **Tuples** in the system: + +![File system example](/img/sourcehub/ttu-relgraph.png) + +We need to express two key relationships: + +1. Bob’s `readme` file is inside Alice’s `/home` directory. +1. Since the file is in her directory, Alice should have **read** access to it. 
+ +One way to grant Alice read access is to create a **Relation Tuple** that links the file to the directory’s owners. +The tuple `(file:/home/readme, reader, (directory:/home, owner))` ensures that **directory owners automatically become file readers**. + +The updated **Relation Graph** now looks like this: + +![File System Relation Graph with Relation from File to Directory owners](/img/sourcehub/ttu-relgraph-2.png) + +While this grants the correct **permissions**, it does not establish an explicit **relationship** between the file and the directory itself. The **Relation Tuple** only states that **directory owners are also file readers**, but it does not indicate that `file:/home/readme` is actually **contained** within `directory:/home`. + +To fully represent the **structure** of the filesystem, we need a way to explicitly define object relationships. This is where the **Tuple to Userset** rule becomes essential. + +What we truly need is a way to **declare a relationship between a file and a directory** and, from that relationship, derive the **set of directory owners**. The following image illustrates this concept: + +![File System Relation Graph parent relation](/img/sourcehub/ttu-relgraph-3.png) + +From this representation, we see that the file is linked to its directory through the **`parent`** relation. This explicitly defines the **structural** relationship between objects in the system. However, to complete the access control logic, we need a way to trace a path from the `("/home/readme", "reader")` node to the `("/home", "owner")` node. + +One possible approach would be to use: + +- A **Computed Userset** rule, linking `reader` to `parent` +- An additional **Relation Tuple** between `/home` and `/home, owner` + +However, this method **mixes object relationships with access control rules**, making it difficult to separate concerns. This is exactly the problem that the **Tuple to Userset** rule solves. 
+ +The **Tuple to Userset** rule is essentially a **Tuple query** combined with a **Computed Userset** rule. It takes two arguments: + +- **Tupleset Filter** – Specifies the relation between objects +- **Computed Userset Relation** – Defines how permissions should propagate + +1. The rule first **rewrites the current Relation Node** using the **Tupleset Filter**. +2. Using this new node, it **fetches all successor nodes**. +3. The successor set is then processed through a **Computed Userset Rewrite**, applying the supplied **Computed Userset Relation**. + +The **Tuple to Userset** rule is powerful because it allows an application to **declare relationships between objects** without embedding access control logic in the application itself. This enables **object hierarchies** to be seamlessly translated into **access control rules**. + +The key benefit of **Tuple to Userset** is that applications only need to create **Tuples expressing object relationships**—without extra tuples just for permission derivation. + +This has **profound implications**: + +- Applications no longer need to **explicitly track access control logic**. +- The rules of access control remain **entirely within Zanzibar**. +- The **application layer is unaware of access rules**, focusing solely on object relationships. + +By enabling complex object hierarchies to be **natively converted into access control rules**, the **Tuple to Userset** rule fundamentally decouples **permission management** from application logic. + +Finally, let's see the TupleToUserset rule in action. + +Let the rule `TupleToUserset(tupleset_filter: "parent", computed_userset: "reader")` be associated with the `"reader"` relation, and assume the same Relation Tuples as shown earlier. + +Evaluating the TupleToUserset rule follows these steps: + +1. Start at the Relation Node `(file:/home/readme, reader)`. +2. Evaluate the rule `TupleToUserset(tupleset_filter: "parent", computed_userset: "reader")`. +3. 
Build a filter using `"tupleset_filter"` -> `(file:/home/readme, parent)`. +4. Fetch all successors of the filter from step 3 -> `[(directory:/home)]`. +5. Apply a Computed Userset rewrite rule using the `"computed_userset"` relation for each fetched Relation Node -> `[(directory:/home, reader)]`. + +This process ensures that the `"reader"` permission propagates correctly through the object hierarchy. The following image further illustrates the concept: + +![Tuple to Userset Evaluation](/img/sourcehub/ttu-eval.png) + +Effectively, the Tuple to Userset rule added a path from the `(/home/readme, reader)` node to the `(/home, owner)` node. +The following image shows the edges the rule added: + +![Relation Graph annotated with the edges added by the Tuple to Userset rule](/img/sourcehub/ttu-relgraph-annotated.png) + +### Rewrite Rule Expression + +In Zanzibar, a relation can have multiple rewrite rules, which are combined into Rewrite Rule Expressions. Each rewrite rule returns a set of Relation Nodes (or usersets), and these sets are combined using the set operations defined in the expression: union, difference, and intersection. The final evaluated set of Nodes represents all the successors of a parent Relation Node. + +### Conclusion + +Relation Tuples define a graph of object relationships, which is used to resolve Access Requests. Usersets enable grouping users and applying relations to an entire set of users. Userset Rewrite Rules allow the creation of Relation Hierarchies and support Object Hierarchies. These features are critical for keeping Access Control Logic within Zanzibar. While the Relation Graph was useful for illustration, in practice Zanzibar dynamically constructs the graph from Relation Tuples and evaluates the Rewrite Rules. This synergy between Relation Tuples and Rewrite Rules powers Zanzibar’s Access Control Model. 
From this perspective, Zanzibar’s API operates on the dynamic Relation Graph. The `Check` API call corresponds to a graph reachability problem. The `Expand` API serves as a debugging tool, exposing the Goal Tree used during the recursive expansion of Rewrite Rules and successor retrieval. + +# References + +- Zanzibar: +- RelBAC: diff --git a/docs/sourcehub/getting-started/1-readme.md b/docs/sourcehub/getting-started/1-readme.md new file mode 100644 index 0000000..9774bc5 --- /dev/null +++ b/docs/sourcehub/getting-started/1-readme.md @@ -0,0 +1,27 @@ +# Getting Started + +To get started with SourceHub, we need to download and initialize our local client. This section will utilize the `CLI` but equivalent functionality is available using the programmatic embedded API. + +## 1. Install SourceHub +First, we will download the SourceHub binary which includes a client. +### Precompiled +You can get precompiled binaries from our Github Release page [here](https://github.com/sourcenetwork/sourcehub/releases) or using your console: +```bash +cd $HOME +wget https://github.com/sourcenetwork/sourcehub/releases/download/v0.2.0/sourcehubd +chmod +x sourcehubd +sudo mv sourcehubd /usr/bin +``` + +### From Source Code +You can download the code and compile your own binaries if you prefer. However you will need a local installation of the Go toolchain with a minimum version of 1.22. +```bash +cd $HOME +git clone https://github.com/sourcenetwork/sourcehub +cd sourcehub +git checkout v0.2.0 +make install +export PATH=$PATH:$GOBIN +``` + +Next we will setup our local client wallet account. \ No newline at end of file diff --git a/docs/sourcehub/getting-started/2-account.md b/docs/sourcehub/getting-started/2-account.md new file mode 100644 index 0000000..de6335d --- /dev/null +++ b/docs/sourcehub/getting-started/2-account.md @@ -0,0 +1,46 @@ +# Account Setup + +Now we will create a new keypair and configure our client wallet. 
+
+## Adding a key
+The following command will generate a new random private key with the name `<key-name>`.
+```bash
+sourcehubd keys add <key-name>
+```
+This will output the newly generated public key, address, and mnemonic (make sure to back up the mnemonic).
+
+![Wallet output](/img/sourcehub/key-add-output.png)
+
+### Importing an existing mnemonic
+If you want to import a key from an existing mnemonic, you can use the `--recover` option when adding the key:
+```bash
+sourcehubd keys add <key-name> --recover
+```
+
+Then input your existing mnemonic when prompted.
+
+:::warning
+You MUST ensure that you sufficiently back up your mnemonic. Failure to do so may result in lost access to your wallet.
+:::
+
+## Configuring client
+Now we can update the `CLI` client config to use the correct RPC node and chain ID. The RPC node is how we access the network API to send transactions and queries. RPC nodes are specific to certain networks, and must match the provided chain ID.
+```bash
+sourcehubd config set client chain-id <chain-id>
+sourcehubd config set client node <rpc-node-address>
+```
+
+## Faucet
+Next we can load our account with some $OPEN tokens from the network faucet. You can find the faucet for the current testnet [here](https://faucet.source.network/).
+
+Using the `<address>
` from above when we created our wallet keypair (you can print it with `sourcehubd keys show <key-name> --address`), we can have the faucet seed our wallet with enough tokens to send a few transactions to get started.
+
+![wallet faucet](/img/sourcehub/faucet.png)
+
+## Verify
+Finally, we can verify that our account exists and that it has been loaded with the initial faucet tokens.
+```bash
+sourcehubd query bank balances <address>
+```
+
+That's it: we now have a newly generated local wallet via the `CLI` client. Now we can interact with the network! First up, we will be creating an ACP policy, seeding it with some state, and executing authorization checks.
\ No newline at end of file
diff --git a/docs/sourcehub/getting-started/3-create-a-policy.md b/docs/sourcehub/getting-started/3-create-a-policy.md
new file mode 100644
index 0000000..80793cb
--- /dev/null
+++ b/docs/sourcehub/getting-started/3-create-a-policy.md
@@ -0,0 +1,63 @@
+---
+title: Create a Policy
+---
+# Create an ACP Policy
+Now that we have a fully configured `CLI` client, we can start executing transactions and queries against the SourceHub network. Our first task will be to create a new ACP Policy.
+
+The SourceHub ACP Module is a [Zanzibar](/sourcehub/concepts/zanzibar)-based global decentralized authorization system. Developers write policies using our Relation-Based Access Control (RelBAC) DSL, which allows you to define resources, relations, and permissions.
+
+- **Resources**: A generic container for some kind of "thing" you wish to gate access to or provide authorization for. It can be anything from a document in [DefraDB](/defradb), a secret in [Orbis](/orbis), or any other digital resource.
+
+- **Relations**: Named connections between resources. Much as tables in a database can be related through their schema types, resources in the SourceHub ACP module can be related to one another. This allows us to create expressive policies that go beyond traditional *Role-Based* or *Attribute-Based* access control.
+
+- **Permissions**: Computed queries over resources, relations, and even other permissions (they're recursive!).
+
+## Example
+Let's create a basic example policy that defines a single resource named `note` with two relations, `owner` and `collaborator`, both of type `actor`.
+
+Create a file named `basic-policy.yaml` and paste the following:
+```yaml
+name: Basic Policy
+
+resources:
+
+  note:
+    relations:
+      owner:
+        types:
+          - actor
+      collaborator:
+        types:
+          - actor
+    permissions:
+      read:
+        expr: owner + collaborator
+      edit:
+        expr: owner
+      delete:
+        expr: owner
+```
+
+Here we are also defining three permissions.
+
+The `read` permission is expressed as `owner + collaborator`, which means *if* you have either an `owner` or `collaborator` relation, *then* you have the `read` permission.
+
+Both the `edit` and `delete` permissions are reserved solely for those with the `owner` relation.
+
+:::info
+Traditionally, we define relations as nouns and permissions as verbs. This is because we often understand authorization as some *thing* (noun) performing some *action* (verb) on some resource.
+:::
+
+### Upload Policy
+Now that we have defined our policy, we can upload it to SourceHub.
+```bash
+sourcehubd tx acp create-policy basic-policy.yaml --from <key-name>
+```
+
+Then, to confirm the policy was created, we can list the existing policy IDs.
+
+```bash
+sourcehubd q acp policy-ids
+```
+
+![Policy IDs](/img/sourcehub/policy-ids-1.png)
\ No newline at end of file
diff --git a/docs/sourcehub/getting-started/4-acp-check.md b/docs/sourcehub/getting-started/4-acp-check.md
new file mode 100644
index 0000000..14c2567
--- /dev/null
+++ b/docs/sourcehub/getting-started/4-acp-check.md
@@ -0,0 +1,54 @@
+---
+title: Access Requests
+---
+# Check a Policy
+
+Now that we have an existing account and a created policy, we're going to seed it with some resources and evaluate some `Check` requests.
+
+> A [`Check`](zanzibar-concept) request is how we determine whether a given action on a resource by an actor is allowed.
+
+We are using the policy we created for our basic note app in the last step, which has a policy ID of `a3cc042579639c4b36357217a197e0bb17bdbb54ff322d4b52e4bba4d19548bf`.
+
+First, we need to "register" a resource object, which creates an entry in the ACP system establishing who the owner of the object is. The command is `sourcehubd tx acp direct-policy-cmd register-object <policy-id> <resource-type> <resource-name>`, where `<resource-type>` is `note` (as defined in the policy resources) and `<resource-name>` is any name we want to assign to our resource, in this case `alice-grocery-list`.
+
+The full command is:
+```bash
+sourcehubd tx acp direct-policy-cmd register-object a3cc042579639c4b36357217a197e0bb17bdbb54ff322d4b52e4bba4d19548bf note alice-grocery-list --from <key-name>
+```
+
+We have now created the `alice-grocery-list` object and registered its `owner` as whatever wallet you used for `<key-name>`.
+
+To verify this, we can run a query to inspect the resource owner.
+```bash
+sourcehubd q acp object-owner a3cc042579639c4b36357217a197e0bb17bdbb54ff322d4b52e4bba4d19548bf note alice-grocery-list
+```
+
+Which will result in something like:
+![Object owner](/img/sourcehub/object-owner.png)
+
+The interesting part here is that the `owner_id` is encoded as a [DID Key](https://w3c-ccg.github.io/did-method-key/) identifier (`did:key:zQ3sha81FK34V8PrB7rbUq9ZbUvRKQZqW5CMqyjer2YQdwFWb`, which is the wallet public key we used to register the object originally).
+
+## Add a Collaborator
+After registering an object and verifying its owner, we can add a collaborator and see how the authorization updates.
+
+We are going to introduce a new actor, `BOB`, who has the identity `did:key:z6Mkmyi3eCUYJ6w2fbgpnf77STLcnMf6tuJ56RQmrFjce6XS`.
+
+To add `BOB` as a `collaborator` (a specific relation defined in the [policy](example-basic-policy)) we can issue a `set-relationship <policy-id> <resource-type> <resource-name> <relation> <actor-did>` command, where `<relation>` is `collaborator` and `<actor-did>` is the BOB identity above (`did:key:z6Mkmyi3eCUYJ6w2fbgpnf77STLcnMf6tuJ56RQmrFjce6XS`).
+```bash
+sourcehubd tx acp direct-policy-cmd set-relationship a3cc042579639c4b36357217a197e0bb17bdbb54ff322d4b52e4bba4d19548bf note alice-grocery-list collaborator did:key:z6Mkmyi3eCUYJ6w2fbgpnf77STLcnMf6tuJ56RQmrFjce6XS --from <key-name>
+```
+
+We can now verify that access to Alice's grocery list resource is properly enabled. We can issue a `verify-access-request <policy-id> <actor-did> <resource-type>:<resource-name>#<permission>` query, which checks whether the permission holds for the given resource and subject. Here we will check the `read` permission, which according to the policy is granted to both the `owner` and `collaborator` relations.
+```bash
+sourcehubd q acp verify-access-request a3cc042579639c4b36357217a197e0bb17bdbb54ff322d4b52e4bba4d19548bf did:key:z6Mkmyi3eCUYJ6w2fbgpnf77STLcnMf6tuJ56RQmrFjce6XS note:alice-grocery-list#read
+```
+
+Which will return `valid: true`.
+
+Let's check a permission that Bob *shouldn't* have, like `edit`, which is reserved for the `owner`.
+```bash
+sourcehubd q acp verify-access-request a3cc042579639c4b36357217a197e0bb17bdbb54ff322d4b52e4bba4d19548bf did:key:z6Mkmyi3eCUYJ6w2fbgpnf77STLcnMf6tuJ56RQmrFjce6XS note:alice-grocery-list#edit
+```
+
+Which will return `valid: false`.
+
diff --git a/docs/sourcehub/getting-started/_category_.json b/docs/sourcehub/getting-started/_category_.json
new file mode 100644
index 0000000..ed6808f
--- /dev/null
+++ b/docs/sourcehub/getting-started/_category_.json
@@ -0,0 +1,5 @@
+{
+  "label": "Getting Started",
+  "position": 2
+}
\ No newline at end of file
diff --git a/docs/sourcehub/networks/_category_.json b/docs/sourcehub/networks/_category_.json
new file mode 100644
index 0000000..db0b27b
--- /dev/null
+++ b/docs/sourcehub/networks/_category_.json
@@ -0,0 +1,3 @@
+{
+  "label": "Networks"
+}
\ No newline at end of file
diff --git a/docs/sourcehub/networks/testnet-1/_category_.json b/docs/sourcehub/networks/testnet-1/_category_.json
new file mode 100644
index 0000000..5ab6ca9
--- /dev/null
+++ b/docs/sourcehub/networks/testnet-1/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Testnet 1"
+}
\ No newline at end of file
diff --git a/docs/sourcehub/networks/testnet-1/join.md b/docs/sourcehub/networks/testnet-1/join.md
new file mode 100644
index 0000000..ff0a6c8
--- /dev/null
+++ b/docs/sourcehub/networks/testnet-1/join.md
@@ -0,0 +1,158 @@
+---
+title: Join
+sidebar_position: 2
+---
+
+# How to join Testnet 1
+The following details the necessary steps to join Testnet 1 as a validator. Only approved validators can join Testnet 1, as we have not yet enabled a permissionless public validator set.
+
+## Hardware Requirements
+First, any validator joining the network must have sufficient server hardware to meet the network's current performance targets.
+
+* x86-64 (amd64) multi-core CPU (AMD / Intel)
+* 16GB RAM
+* 256GB SSD Storage
+* 100Mbps bi-directional Internet connection
+
+## SourceHub Binary
+
+### Precompiled
+You can get the `sourcehubd` binary from the releases page of the SourceHub repo: https://github.com/sourcenetwork/sourcehub/releases/tag/v0.2.0
+```bash
+cd $HOME
+wget https://github.com/sourcenetwork/sourcehub/releases/download/v0.2.0/sourcehubd
+chmod +x sourcehubd
+sudo mv sourcehubd /usr/bin
+```
+
+### From Source
+You can download the code and compile your own binaries if you prefer. However, you will need a local installation of the `go` toolchain at a minimum version of 1.21.
+```bash
+cd $HOME
+git clone https://github.com/sourcenetwork/sourcehub
+cd sourcehub
+git checkout v0.2.0
+make install
+export PATH=$PATH:$GOBIN
+```
+Now you will have `sourcehubd` available on your local system.
+
+## Initialization
+To join the network you need to initialize your node with a keypair, download the genesis file, and update your configuration.
+
+```bash
+# You must specify your own moniker, which is a label for your node
+sourcehubd init <moniker> --chain-id sourcehub-testnet1
+
+# Download the Genesis
+cd $HOME
+wget https://raw.githubusercontent.com/sourcenetwork/networks/testnet/testnet1/genesis.json
+mv genesis.json $HOME/.sourcehub/config/genesis.json
+
+# Update your configuration
+cd $HOME/.sourcehub/config
+sed -i 's/minimum-gas-prices = ""/minimum-gas-prices = "0.001uopen"/' app.toml
+sed -i 's/persistent_peers = ""/persistent_peers = "2da42ce7b32cb76c3a86db2eadfab8508ee41815@54.158.208.103:26656"/' config.toml

+# Update timeouts
+sed -i 's/timeout_propose = "3s"/timeout_propose = "500ms"/' config.toml
+sed -i 's/timeout_commit = "5s"/timeout_commit = "1s"/' config.toml
+```
+
+### State Sync (Recommended)
+At this point you can start your node and it will begin syncing with the rest of the network from height 0. However, this process can take several hours to complete.
Instead, nodes can use the ***much*** faster State Sync system, which automatically downloads a snapshot from other nodes at a specific trusted height and syncs from that point onwards. This process takes only a couple of minutes.
+
+```bash
+cd $HOME/.sourcehub/config
+# Configure trusted blocks
+sed -i 's/enable = false/enable = true/' config.toml
+sed -i 's/trust_height = 0/trust_height = <trust-height>/' config.toml
+sed -i 's/trust_hash = ""/trust_hash = "<trust-hash>"/' config.toml
+sed -i 's/rpc_servers = ""/rpc_servers = "http:\/\/rpc1.testnet1.source.network:26657,http:\/\/rpc2.testnet1.source.network:26657"/' config.toml
+
+# Download snapshot
+cd $HOME
+export BLOCK_HEIGHT=<trust-height>
+wget https://sourcehub-snapshot.s3.amazonaws.com/testnet-1/$BLOCK_HEIGHT-3.tar.gz
+sourcehubd snapshots load $BLOCK_HEIGHT-3.tar.gz
+sourcehubd snapshots restore $BLOCK_HEIGHT 3
+sourcehubd comet bootstrap-state
+```
+
+You can get the `<trust-height>` and `<trust-hash>` from the `#validator-info` channel in the Validator section of the [Source Network Discord](https://discord.source.network).
+
+### SystemD Service (Optional)
+
+Create the following file: `/etc/systemd/system/sourcehubd.service`
+```bash
+[Unit]
+Description=SourceHub service
+After=network-online.target
+
+[Service]
+User=<user>
+ExecStart=/<path-to-binary>/sourcehubd start --x-crisis-skip-assert-invariants
+Restart=no
+LimitNOFILE=4096
+
+[Install]
+WantedBy=multi-user.target
+```
+
+You must edit the `<user>` and `<path-to-binary>` values in the SystemD service file to match your system.
+
+#### Start the service
+
+```bash
+# Reload SystemD
+systemctl daemon-reload
+systemctl restart systemd-journald
+
+# Start the SourceHub service
+systemctl enable sourcehubd.service
+systemctl start sourcehubd.service
+```
+
+To follow the service log, run `journalctl -fu sourcehubd`.
+
+## Register your validator
+Once your node is running and synchronized with the rest of the network, you can register as a validator.
+
+First, we want to create a local keypair.
This keypair is independent of your validator and can exist on any node, but we need one to submit transactions to the network, such as the `create-validator` transaction.
+```bash
+sourcehubd keys add <key-name>
+```
+
+Make sure to back up the newly created keypair. Then, go to the Source Network [Faucet](https://faucet.source.network/) and get some `$OPEN` tokens so you can pay for transaction gas.
+
+You also need to post your key address in the `#validator-general` chat on the [Source Network Discord](https://discord.source.network) so you can receive your minimum `stake` tokens. These stake tokens determine voting power in the network, and are separate from the `$OPEN` tokens used for gas.
+
+Once you have received your `stake` tokens from the Source Network team, you can create your validator with the following commands.
+
+```bash
+# Create the validator info JSON config
+# Update the moniker, website, security, and details
+cd $HOME
+echo "{
+  \"pubkey\": $(sourcehubd comet show-validator),
+  \"amount\": \"1stake\",
+  \"moniker\": \"<moniker>\",
+  \"website\": \"validator's (optional) website\",
+  \"security\": \"validator's (optional) security contact email\",
+  \"details\": \"validator's (optional) details\",
+  \"commission-rate\": \"0\",
+  \"commission-max-rate\": \"0\",
+  \"commission-max-change-rate\": \"0\",
+  \"min-self-delegation\": \"1\"
+}" > validator.json
+
+# Create validator transaction
+sourcehubd tx staking create-validator validator.json --from=<key-name> --fees 1000uopen -y
+```
+
+Where `<key-name>` is the same key you created above.
+
+If the transaction is successful, you now have an *inactive* validator on SourceHub Testnet 1. To become active, you must post your validator address (`sourcehubd comet show-address`) in the `#validator-general` chat on the [Source Network Discord](https://discord.source.network), and we will delegate voting power to you. This will move you into the *active* validator set, and your node will start producing and verifying blocks.
+
+> The SourceHub Testnet 1 is a public but permissioned network, meaning only approved validators can join the network. This is guaranteed by the fact that Source owns 100% of the staking power of the network.
diff --git a/docs/sourcehub/networks/testnet-1/overview.md b/docs/sourcehub/networks/testnet-1/overview.md
new file mode 100644
index 0000000..6c379fd
--- /dev/null
+++ b/docs/sourcehub/networks/testnet-1/overview.md
@@ -0,0 +1,155 @@
+---
+title: Overview
+sidebar_position: 1
+---
+# Testnet Overview
+
+## Introduction
+Testnets are essential for developing and testing blockchain networks before they go live on mainnet. They consist of nodes running specific blockchain protocols and modules that extend the functionality of DefraDB.
+
+The SourceHub protocol operates continuously, with decentralized and cryptographically secured API endpoints. Its purpose is to offer developers a condensed representation of the complete stack. This article explores key aspects of Source's Testnet1: its significance, deployment, and management.
+
+## Core Components of the Protocol
+The Source Network ecosystem protocol, built on the Cosmos SDK and using CometBFT as its consensus mechanism in the SourceHub Testnet, is driven by two interconnected components:
+
+1. **Database**: Tailored for off-chain data querying, featuring a graph-like tuple structure.
+
+2. **Trust Protocol**: Operating on-chain, complementing the database with specific features.
+
+These components form the foundational core of Source Network and comprise four essential modules:
+
+1. **Access control**: A general-purpose black box for the off-chain database, using a graph-like tuple structure.
+
+2. **Secret management**: Authorizing nodes through encryption and cryptography, with the Orbis secret management engine utilizing decentralized key pairs.
+
+3. **Anchoring**: Facilitating strict event ordering for timestamping data, essential for blockchain's time-stamping service.
+
+4. 
**Auditing**: Ensuring transparency and integrity within the protocol.
+
+## Core Features and Goals of Testnet1
+
+SourceHub Testnet1 serves as the initial MVP, featuring the fundamental components necessary for application development. It provides early-stage interaction with the protocol, excluding aspects like tokenomics and advanced functionality such as Secret Management and Anchoring.
+
+Key Points:
+
+1. **Comprehensive MVP**: Despite limitations in ACP, Orbis, and Anchoring, and the absence of token economics, Testnet1 serves as a comprehensive MVP of the complete Source Network stack.
+
+2. **Permissioned Nature**: The network's permissioned nature, limited validator count, and the need for whitelisting underscore Testnet1's exclusive participation criteria.
+
+3. **Developer Engagement**: Active developer participation is encouraged, with an emphasis on feedback and monitoring system performance metrics.
+
+4. **Public, Fairly Permissioned Network**: Testnet1 is positioned as a public, fairly permissioned network, requiring whitelisting for SourceHub participation.
+
+5. **Chain Independence**: The absence of IBC connections to other chains highlights Testnet1's independent operational framework.
+
+6. **Interoperability Focus**: The emphasis is on enhancing interoperability beyond the initial Testnet1 phase. This enables the Source ecosystem to interact and exchange information with other blockchain networks, providing a more connected and decentralized ecosystem.
+
+The ultimate goal is seamless progression through Testnet1 and towards Mainnet, with ongoing evolution in core components, exposed features, and adjusted node requirements.
+
+This progression involves considerations such as gRPC (Mainnet) endpoints, P2P connectivity, and dependencies on the BFT-based Cosmos SDK protocol.
+
+```yaml
+grpc:
+  grpcURL: "0.0.0.0:8081"
+  restURL: "0.0.0.0:8091"
+  logging: true
+```
+
+## Token Distribution and Developer Support
+For testing purposes, developers require tokens in their local wallets, obtained through a faucet. These tokens are not actual coins; they facilitate transactions within the testnet and cover gas fees. A user-friendly faucet serves as an API endpoint/tool for accessing these dummy tokens.
+
+## Running a Node
+
+### Hardware Requirements
+For effective participation in the Testnet, certain hardware specifications are essential for running a node as a validator. The key considerations encompass network capability, persistence, and connections. The specified hardware requirements include:
+
+* x86-64 (amd64) multi-core CPU (AMD / Intel)
+* 16GB RAM
+* 256GB SSD Storage
+* 100Mbps bi-directional Internet connection
+
+### Installation
+The installation is divided into two components:
+
+1. SourceHub Daemon: Also referred to as the node software. The binaries for the SourceHub Daemon are named `sourcehubd`.
+
+2. Orbis Daemon: The binaries for Orbis are named `orbisd`.
+
+There are different options for installation:
+
+* **Build from Source**: To install with this method, follow these steps:
+
+  ```
+  git clone [repo]
+  git checkout [specific version tag]
+  make build [this does all compilation necessary for the chain]
+  orbisd version [to test if the installation works]
+  ```
+
+* **Pre-compiled Binaries**: The released binaries are available on the GitHub release pages.
+
+* **Docker Image**: This method involves building a Docker image.
+
+### Configuration
+Configuration for running SourceHub and Orbis includes settings for each daemon, and the Node and RPC addresses must match your local machine's IP addresses.
+
+Specific values like the address prefix and account name must be obtained from your local configuration.
+
+```yaml
+cosmos:
+  # Cosmos chain ID
+  chainId: sourcehub-testnet1
+  # Cosmos keyring key name
+  accountName: <account-name>
+  # Cosmos keyring backend ('os' is the default when running sourcehubd)
+  keyringBackend: os
+  # Cosmos address prefix ('source' is for SourceHub)
+  addressPrefix: source
+  # Transaction fees
+  fees: 2000uopen
+  rpcAddress: <rpc-address>
+```
+
+* Remove the crypto section (specifically `host_crypto_seed`) from the config file.
+```yaml
+host:
+  listenAddresses:
+    - /ip4/0.0.0.0/tcp/9001
+```
+
+* Update the config file accordingly.
+
+## Public Bootstrapping Nodes and RPC Endpoints
+Public bootstrapping nodes and RPC endpoints are essential components for network initiation. For SourceHub, the hardware requirements align with those for Cosmos SDK chains, making CometBFT compatible.
+
+## Validators
+
+### Validator Role and Requirements
+Validators play a crucial role in the blockchain by committing new blocks through an automated voting process. If a validator becomes unavailable or signs two different blocks at the same height, their stake faces the risk of being slashed.
+
+Maintaining and monitoring validator nodes is crucial for optimal performance. Interested participants can join the waitlist to become a validator.
+
+### How to Become a Testnet Validator
+To become a Testnet validator, follow these steps:
+
+* Run a Full Node - Ensure synchronization with the network.
+* Fund Your Wallet - Use project tokens to delegate funds to your validator.
+* Create a Validator - Execute the relevant command.
+* Confirm Validator Creation - Verify the successful creation of the validator.
+* Import CLI Commands - Follow the necessary CLI commands for continued participation.
+
+## Conclusion
+Source's Testnet1 serves as a foundational MVP, offering developers a preview of the core components (Access Control, Secret Management, Document Anchoring, and Auditing) while acknowledging its limitations and emphasizing the network's permissioned nature.
+
+The outlined hardware requirements and steps for becoming a validator provide a clear pathway for interested participants.
\ No newline at end of file
diff --git a/docs/sourcehub/overview.md b/docs/sourcehub/overview.md
new file mode 100644
index 0000000..9fe7e05
--- /dev/null
+++ b/docs/sourcehub/overview.md
@@ -0,0 +1,23 @@
+---
+sidebar_position: 1
+title: Overview
+slug: /sourcehub
+---
+# SourceHub Overview
+
+![SourceHub Overview](/img/sourcehub-cover-copy.png)
+
+SourceHub is the Source Network's trust protocol, which facilitates trusted and authenticated sharing and collaboration of data across the network and beyond. It utilizes [CometBFT](https://cometbft.com/) consensus and is built on the [Cosmos SDK](link), providing a solid technical foundation for our decentralized infrastructure and application-specific chain.
+
+The primary functions of SourceHub are:
+- **ACP Module**: A decentralized authorization engine, inspired in part by [Google Zanzibar's](/sourcehub/concepts/zanzibar) Relation-Based Access Control (RelBAC).
+
+- **Bulletin Module**: A trust-minimized network broadcast hub, used both for DefraDB's Document Anchoring and by [Orbis's](/orbis) [Multi-Party Computation (MPC)](https://en.wikipedia.org/wiki/Secure_multi-party_computation) to optimize the network communication that initializes and maintains the [Distributed Key Generation (DKG)](https://en.wikipedia.org/wiki/Distributed_key_generation).
+
+- **Developer-Lock Tier Module** (:construction: coming soon :construction:): A SaaS-inspired pricing module to streamline DevEx around tokenomics, abstract transaction gas, and simplify the user wallet experience.
Similar to [Account Abstraction](https://ethereum.org/en/roadmap/account-abstraction/) systems, but native to our protocol. + +--- + +Although SourceHub is an independent system with self-contained functionality - like the rest of the Source Stack - it is designed to work in conjunction with [DefraDB](/defradb) nodes to help facilitate trust in its peer-to-peer architecture. + +![SourceHub+DefraDB](/img/sourcehub/trust-protocol-defradb.png) \ No newline at end of file diff --git a/docusaurus.config.js b/docusaurus.config.js index badb1eb..80c1124 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -1,8 +1,7 @@ // @ts-check // Note: type annotations allow type checking and IDEs autocompletion -const lightCodeTheme = require("./src/code-theme/code-theme-light"); -const darkCodeTheme = require("prism-react-renderer/themes/oceanicNext"); +const variableCodeTheme = require("./src/code-theme/code-theme"); /** @type {import('@docusaurus/types').Config} */ const config = { @@ -17,14 +16,27 @@ const config = { projectName: "source-developer", // Usually your repo name. 
presets: [ [ - "classic", - /** @type {import('@docusaurus/preset-classic').Options} */ + "docusaurus-preset-openapi", + /** @type {import('docusaurus-preset-openapi').Options} */ ({ + api: { + path: "openapi.yml", + routeBasePath: "/sourcehub/api", + }, docs: { routeBasePath: "/", sidebarPath: require.resolve("./sidebars.js"), editUrl: "https://github.com/sourcenetwork/docs.source.network/edit/master/", + + // Reorder changelog sidebar + async sidebarItemsGenerator({ + defaultSidebarItemsGenerator, + ...args + }) { + const sidebarItems = await defaultSidebarItemsGenerator(args); + return reverseSidebarChangelog(sidebarItems); + }, }, theme: { customCss: require.resolve("./src/css/custom.scss"), @@ -41,23 +53,45 @@ const config = { }, }, colorMode: { - respectPrefersColorScheme: true, + respectPrefersColorScheme: false, + defaultMode: "dark", }, navbar: { title: null, hideOnScroll: false, logo: { alt: "Source Network Documentation", - src: "img/source-docs-full-light.svg", - srcDark: "img/source-docs-full-dark.svg", + src: "img/source-docs-logo_v2.svg", + srcDark: "img/source-docs-logo-w_v2.svg", }, items: [ { type: "docSidebar", position: "left", - sidebarId: "mainSidebar", - label: "Docs", - className: "header-docs-link", + sidebarId: "defraSidebar", + label: "DefraDB", + className: "header-docs-link-defra", + }, + { + type: "docSidebar", + position: "left", + sidebarId: "sourcehubSidebar", + label: "SourceHub", + className: "header-docs-link-sourcehub", + }, + { + type: "docSidebar", + position: "left", + sidebarId: "orbisSidebar", + label: "Orbis", + className: "header-docs-link-orbis", + }, + { + type: "docSidebar", + position: "left", + sidebarId: "lensvmSidebar", + label: "LensVM", + className: "header-docs-link-lensvm", }, { href: "https://github.com/sourcenetwork/docs.source.network", @@ -70,8 +104,8 @@ const config = { footer: { logo: { alt: "Facebook Open Source Logo", - src: "img/source-full-light.svg", - srcDark: 
"img/source-full-dark.svg", + src: "img/source-logo_v2.svg", + srcDark: "img/source-logo-w_v2.svg", href: "https://source.network/", }, links: [ @@ -122,14 +156,39 @@ const config = { copyright: `Copyright © ${new Date().getFullYear()} Source, Inc & Democratized Data Foundation. Built with Docusaurus.`, }, prism: { - theme: lightCodeTheme, - darkTheme: darkCodeTheme, + theme: variableCodeTheme, + }, + algolia: { + appId: "N3M9YBYYQY", + apiKey: "909584ed5214e2d24ae2a85a5cd8664a", + indexName: "source-docs", }, }), - plugins: ["docusaurus-plugin-sass"], + plugins: [ + [ + "docusaurus-plugin-sass", + { + sassOptions: { + includePaths: ["./src/css"], + }, + }, + ], + ], customFields: { docsData: {}, }, }; module.exports = config; + +// Reverse the sidebar items ordering (including nested category items) +function reverseSidebarChangelog(items) { + // Reverse items in categories + const result = items.map((item) => { + if (item.type === "category" && item.label == "Release Notes") { + return { ...item, items: item.items.reverse() }; + } + return item; + }); + return result; +} diff --git a/openapi.yml b/openapi.yml new file mode 100644 index 0000000..b753865 --- /dev/null +++ b/openapi.yml @@ -0,0 +1,7539 @@ +openapi: 3.0.1 +info: + title: HTTP API Console + description: "" + version: 0.2.0 +servers: +- url: http://rpc1.testnet1.source.network:1317/ +tags: +- name: ACP + description: Access Control Policy (ACP) message types +- name: Bulletin + description: Bulletin module message types +paths: + /sourcehub.acp.Msg/CheckAccess: + post: + tags: + - ACP + summary: CheckAccess executes an Access Request for an User + description: |- + The resulting evaluation is stored in SourceHub. It's used to generate a cryptographic proof that the given Access Request + was valid at a particular block height. 
+ operationId: SourcehubAcpMsg_CheckAccess + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + policy_id: + type: string + creation_time: + type: string + format: date-time + access_request: + title: AccessRequest represents the wish to perform a set of operations + by an actor + type: object + properties: + operations: + type: array + items: + type: object + properties: + object: + title: target object for operation + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be + access controlled within a Policy. + permission: + title: permission required to perform operation + type: string + description: Operation represents an action over an object. + actor: + title: actor requesting operations + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests + to a Policy. + required: true + responses: + "200": + description: A successful response. + content: + '*/*': + schema: + type: object + properties: + decision: + title: AccessDecision models the result of evaluating a set of + AccessRequests for an Actor + type: object + properties: + id: + type: string + policy_id: + title: used as part of id generation + type: string + creator: + title: used as part of id generation + type: string + creator_acc_sequence: + title: used as part of id generation + type: string + format: uint64 + operations: + title: used as part of id generation + type: array + items: + type: object + properties: + object: + title: target object for operation + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must + be access controlled within a Policy. + permission: + title: permission required to perform operation + type: string + description: Operation represents an action over an object. 
+ actor_did: + title: used as part of id generation + type: string + actor: + title: used as part of id generation + type: string + params: + title: used as part of id generation + type: object + properties: + decision_expiration_delta: + title: number of blocks a Decision is valid for + type: string + format: uint64 + proof_expiration_delta: + title: number of blocks a DecisionProof is valid for + type: string + format: uint64 + ticket_expiration_delta: + title: number of blocks an AccessTicket is valid for + type: string + format: uint64 + creation_time: + type: string + format: date-time + issued_height: + title: issued_height stores the block height when the Decision + was evaluated + type: string + format: uint64 + default: + description: An unexpected error response. + content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body + /sourcehub.acp.Msg/CreatePolicy: + post: + tags: + - ACP + summary: |- + CreatePolicy adds a new Policy. + description: |- + The Policy models an application's high-level access control rules. + operationId: SourcehubAcpMsg_CreatePolicy + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + policy: + type: string + marshal_type: + type: string + description: |- + PolicyEncodingType enumerates supported marshaling types for policies. + + - UNKNOWN: Fallback value for a missing Marshaling Type + - SHORT_YAML: Policy Marshaled as a YAML Short Policy definition + - SHORT_JSON: Policy Marshaled as a JSON Short Policy definition + default: UNKNOWN + enum: + - UNKNOWN + - SHORT_YAML + - SHORT_JSON + creation_time: + type: string + format: date-time + required: true + responses: + "200": + description: A successful response.
+ content: + '*/*': + schema: + type: object + properties: + policy: + type: object + properties: + id: + type: string + name: + type: string + description: + type: string + creation_time: + type: string + format: date-time + attributes: + type: object + additionalProperties: + type: string + resources: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + permissions: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + expression: + type: string + description: |- + Permission models a special type of Relation which is evaluated at runtime. + A permission often maps to an operation defined for a resource which an actor may attempt. + relations: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current + relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible + actors resource + type: string + relation_name: + title: relation_name scopes permissible + actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + description: |- + Resource models a namespace for objects in a Policy. + Applications will have multiple entities which they must manage, such as files or groups. + A Resource represents a set of entities of a certain type.
+ actor_resource: + type: object + properties: + name: + type: string + doc: + type: string + relations: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current + relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors + resource + type: string + relation_name: + title: relation_name scopes permissible actors + relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + description: ActorResource represents a special Resource which + is reserved for Policy actors. + creator: + type: string + description: |- + Policy represents an ACP module Policy definition. + Each Policy defines a set of high-level rules over how the access control system + should behave. + default: + description: An unexpected error response. + content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body + /sourcehub.acp.Msg/DeleteRelationship: + post: + tags: + - ACP + summary: |- + DeleteRelationship removes a Relationship from a Policy. + description: |- + If the Relationship was not found in a Policy, this Msg is a no-op. + operationId: SourcehubAcpMsg_DeleteRelationship + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + policy_id: + type: string + relationship: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy.
+ relation: + type: string + subject: + type: object + properties: + actor: + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access + requests to a Policy. + actor_set: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must + be access controlled within a Policy. + relation: + type: string + description: |- + ActorSet represents a set of Actors in a Policy. + It is specified through an Object, Relation pair, which represents + all actors which have a relationship with given obj-rel pair. + This expansion is recursive. + all_actors: + type: object + properties: {} + description: |- + AllActors models a special Relationship Subject which indicates + that all Actors in the Policy are included. + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy. + description: Subject specifies the target of a Relationship. + description: |- + Relationship models an access control rule. + It states that the given subject has relation with object. + required: true + responses: + "200": + description: A successful response. + content: + '*/*': + schema: + type: object + properties: + record_found: + type: boolean + default: + description: An unexpected error response. + content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body + /sourcehub.acp.Msg/RegisterObject: + post: + tags: + - ACP + summary: RegisterObject creates a Relationship within a Policy. 
+ description: |- + The Owner has complete control over the set of subjects that are related + to their Object, + + giving them autonomy to share the object and revoke access to the object, + + much like owners in a Discretionary Access Control model. + + Attempting to register a previously registered Object is an error; + Object IDs are therefore assumed to be unique within a Policy. + operationId: SourcehubAcpMsg_RegisterObject + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + policy_id: + type: string + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + creation_time: + type: string + format: date-time + required: true + responses: + "200": + description: A successful response. + content: + '*/*': + schema: + type: object + properties: + result: + title: RegistrationResult encodes the possible result set from + Registering an Object + type: string + description: |- + - NoOp: NoOp indicates no action was taken. The operation failed or the Object already existed and was active + - Registered: Registered indicates the Object was successfully registered to the Actor. + - Unarchived: Unarchived indicates that a previously deleted Object is active again. + Only the original owners can Unarchive an object. + default: NoOp + enum: + - NoOp + - Registered + - Unarchived + default: + description: An unexpected error response. + content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body + /sourcehub.acp.Msg/SetRelationship: + post: + tags: + - ACP + summary: SetRelationship creates or updates a Relationship within a Policy.
+ description: |- + A Relationship is a statement which ties together an object and a + subject with a "relation", + + which means the set of high-level rules defined in the Policy will apply + to these entities. + operationId: SourcehubAcpMsg_SetRelationship + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + policy_id: + type: string + creation_time: + type: string + format: date-time + relationship: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy. + relation: + type: string + subject: + type: object + properties: + actor: + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access + requests to a Policy. + actor_set: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must + be access controlled within a Policy. + relation: + type: string + description: |- + ActorSet represents a set of Actors in a Policy. + It is specified through an Object, Relation pair, which represents + all actors which have a relationship with given obj-rel pair. + This expansion is recursive. + all_actors: + type: object + properties: {} + description: |- + AllActors models a special Relationship Subject which indicates + that all Actors in the Policy are included. + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy. + description: Subject specifies the target of a Relationship. + description: |- + Relationship models an access control rule. + It states that the given subject has relation with object. + required: true + responses: + "200": + description: A successful response.
+ content: + '*/*': + schema: + type: object + properties: + record_existed: + title: "Indicates whether the given Relationship previously existed,\ + \ ie the Tx was a no op" + type: boolean + default: + description: An unexpected error response. + content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body + /sourcehub.acp.Msg/UnregisterObject: + post: + tags: + - ACP + summary: UnregisterObject "unshares" an Object. + description: |- + A caveat is that after removing the Relationships, a record of the original Object owner + is maintained to prevent an "ownership hijack" attack. + + Suppose Bob owns object Foo, which is shared with Charlie but not Eve. + Eve wants to access Foo but was not given permission to; they could "hijack" Bob's object by waiting for Bob to Unregister Foo, + then submitting a RegisterObject Msg, effectively becoming Foo's new owner. + If Charlie has a copy of the object, Eve could convince Charlie to share his copy, granting Eve access to Foo. + The previous scenario where an unauthorized user is able to claim ownership of data previously inaccessible to them + is an "ownership hijack". + operationId: SourcehubAcpMsg_UnregisterObject + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + policy_id: + type: string + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + required: true + responses: + "200": + description: A successful response. + content: + '*/*': + schema: + type: object + properties: + found: + type: boolean + default: + description: An unexpected error response.
+ content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body + /sourcehub.bulletin.Msg/CreatePost: + post: + tags: + - Bulletin + summary: Post to the Bulletin + operationId: SourcehubBulletinMsg_CreatePost + requestBody: + content: + '*/*': + schema: + type: object + properties: + creator: + type: string + namespace: + type: string + payload: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + required: true + responses: + "200": + description: A successful response. + content: + '*/*': + schema: + type: object + default: + description: An unexpected error response. + content: + '*/*': + schema: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + x-codegen-request-body-name: body +components: + schemas: + cosmos.auth.v1beta1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + max_memo_characters: + type: string + format: uint64 + tx_sig_limit: + type: string + format: uint64 + tx_size_cost_per_byte: + type: string + format: uint64 + sig_verify_cost_ed25519: + type: string + format: uint64 + sig_verify_cost_secp256k1: + type: string + format: uint64 + description: |- + params defines the x/auth parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. 
+ + Since: cosmos-sdk 0.47 + cosmos.auth.v1beta1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. + + Since: cosmos-sdk 0.47 + cosmos.auth.v1beta1.Params: + type: object + properties: + max_memo_characters: + type: string + format: uint64 + tx_sig_limit: + type: string + format: uint64 + tx_size_cost_per_byte: + type: string + format: uint64 + sig_verify_cost_ed25519: + type: string + format: uint64 + sig_verify_cost_secp256k1: + type: string + format: uint64 + description: Params defines the parameters for the auth module. + google.protobuf.Any: + type: object + properties: + '@type': + type: string + description: |- + A URL/resource name that uniquely identifies the type of the serialized + protocol buffer message. This string must contain at least + one "/" character. The last segment of the URL's path must represent + the fully qualified name of the type (as in + `path/google.protobuf.Duration`). The name should be in a canonical form + (e.g., leading "." is not accepted). + + In practice, teams usually precompile into the binary all types that they + expect it to use in the context of Any. However, for URLs which use the + scheme `http`, `https`, or no scheme, one can optionally set up a type + server that maps type URLs to message definitions as follows: + + * If no scheme is provided, `https` is assumed. + * An HTTP GET on the URL must yield a [google.protobuf.Type][] + value in binary format, or produce an error. + * Applications are allowed to cache lookup results based on the + URL, or have them precompiled into a binary to avoid any + lookup. Therefore, binary compatibility needs to be preserved + on changes to types. (Use versioned type names to manage + breaking changes.) + + Note: this functionality is not currently available in the official + protobuf release, and it is not used for type URLs beginning with + type.googleapis.com. 
+ + Schemes other than `http`, `https` (or the empty scheme) might be + used with implementation specific semantics. + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. 
Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + google.rpc.Status: + type: object + properties: + code: + type: integer + format: int32 + message: + type: string + details: + type: array + items: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... 
+ } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + cosmos.authz.v1beta1.Grant: + type: object + properties: + authorization: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) 
+ any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + expiration: + title: |- + time when the grant will expire and will be pruned. If null, then the grant + doesn't have a time expiration (other conditions in `authorization` + may apply to invalidate the grant) + type: string + format: date-time + description: |- + Grant gives permissions to execute + the provided method with expiration time. + cosmos.authz.v1beta1.MsgExec: + type: object + properties: + grantee: + type: string + msgs: + type: array + description: |- + Execute Msg. + The x/authz will try to find a grant matching (msg.signers[0], grantee, MsgTypeURL(msg)) + triple and validate it.
+ items: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. 
Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + description: |- + MsgExec attempts to execute the provided messages using + authorizations granted to the grantee. Each message should have only + one signer corresponding to the granter of the authorization. + cosmos.authz.v1beta1.MsgExecResponse: + type: object + properties: + results: + type: array + items: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + description: MsgExecResponse defines the Msg/MsgExecResponse response type. + cosmos.authz.v1beta1.MsgGrant: + type: object + properties: + granter: + type: string + grantee: + type: string + grant: + type: object + properties: + authorization: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... 
+ if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + expiration: + title: |- + time when the grant will expire and will be pruned. If null, then the grant + doesn't have a time expiration (other conditions in `authorization` + may apply to invalidate the grant) + type: string + format: date-time + description: |- + Grant gives permissions to execute + the provided method with expiration time. + description: |- + MsgGrant is a request type for Grant method.
It declares authorization to the grantee + on behalf of the granter with the provided expiration time. + cosmos.authz.v1beta1.MsgGrantResponse: + type: object + description: MsgGrantResponse defines the Msg/MsgGrant response type. + cosmos.authz.v1beta1.MsgRevoke: + type: object + properties: + granter: + type: string + grantee: + type: string + msg_type_url: + type: string + description: |- + MsgRevoke revokes any authorization with the provided sdk.Msg type on the + granter's account that has been granted to the grantee. + cosmos.authz.v1beta1.MsgRevokeResponse: + type: object + description: MsgRevokeResponse defines the Msg/MsgRevokeResponse response type. + cosmos.bank.v1beta1.Input: + type: object + properties: + address: + type: string + coins: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: Input models transaction input. + cosmos.bank.v1beta1.MsgMultiSend: + type: object + properties: + inputs: + type: array + description: |- + Inputs, despite being `repeated`, only allow one sender input. This is + checked in MsgMultiSend's ValidateBasic. + items: + type: object + properties: + address: + type: string + coins: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: Input models transaction input.
+ outputs: + type: array + items: + type: object + properties: + address: + type: string + coins: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: Output models transaction outputs. + description: "MsgMultiSend represents an arbitrary multi-in, multi-out send\ + \ message." + cosmos.bank.v1beta1.MsgMultiSendResponse: + type: object + description: MsgMultiSendResponse defines the Msg/MultiSend response type. + cosmos.bank.v1beta1.MsgSend: + type: object + properties: + from_address: + type: string + to_address: + type: string + amount: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: MsgSend represents a message to send coins from one account to + another. + cosmos.bank.v1beta1.MsgSendResponse: + type: object + description: MsgSendResponse defines the Msg/Send response type. + cosmos.bank.v1beta1.MsgSetSendEnabled: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module. + send_enabled: + type: array + description: send_enabled is the list of entries to add or update. + items: + type: object + properties: + denom: + type: string + enabled: + type: boolean + description: |- + SendEnabled maps coin denom to a send_enabled status (whether a denom is + sendable). + use_default_for: + type: array + description: |- + use_default_for is a list of denoms that should use the params.default_send_enabled value. + Denoms listed here will have their SendEnabled entries deleted. 
+ If a denom is included that doesn't have a SendEnabled entry, + it will be ignored. + items: + type: string + description: |- + MsgSetSendEnabled is the Msg/SetSendEnabled request type. + + Only entries to add/update/delete need to be included. + Existing SendEnabled entries that are not included in this + message are left unchanged. + + Since: cosmos-sdk 0.47 + cosmos.bank.v1beta1.MsgSetSendEnabledResponse: + type: object + description: |- + MsgSetSendEnabledResponse defines the Msg/SetSendEnabled response type. + + Since: cosmos-sdk 0.47 + cosmos.bank.v1beta1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + send_enabled: + type: array + description: |- + Deprecated: Use of SendEnabled in params is deprecated. + For genesis, use the newly added send_enabled field in the genesis object. + Storage, lookup, and manipulation of this information is now in the keeper. + + As of cosmos-sdk 0.47, this only exists for backwards compatibility of genesis files. + items: + type: object + properties: + denom: + type: string + enabled: + type: boolean + description: |- + SendEnabled maps coin denom to a send_enabled status (whether a denom is + sendable). + default_send_enabled: + type: boolean + description: |- + params defines the x/bank parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. + + Since: cosmos-sdk 0.47 + cosmos.bank.v1beta1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. 
+ + Since: cosmos-sdk 0.47 + cosmos.bank.v1beta1.Output: + type: object + properties: + address: + type: string + coins: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: Output models transaction outputs. + cosmos.bank.v1beta1.Params: + type: object + properties: + send_enabled: + type: array + description: |- + Deprecated: Use of SendEnabled in params is deprecated. + For genesis, use the newly added send_enabled field in the genesis object. + Storage, lookup, and manipulation of this information is now in the keeper. + + As of cosmos-sdk 0.47, this only exists for backwards compatibility of genesis files. + items: + type: object + properties: + denom: + type: string + enabled: + type: boolean + description: |- + SendEnabled maps coin denom to a send_enabled status (whether a denom is + sendable). + default_send_enabled: + type: boolean + description: Params defines the parameters for the bank module. + cosmos.bank.v1beta1.SendEnabled: + type: object + properties: + denom: + type: string + enabled: + type: boolean + description: |- + SendEnabled maps coin denom to a send_enabled status (whether a denom is + sendable). + cosmos.base.v1beta1.Coin: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + cosmos.base.node.v1beta1.ConfigResponse: + type: object + properties: + minimum_gas_price: + type: string + pruning_keep_recent: + title: pruning settings + type: string + pruning_interval: + type: string + description: ConfigResponse defines the response structure for the Config gRPC + query. 
+ cosmos.base.node.v1beta1.StatusResponse:
+ type: object
+ properties:
+ earliest_store_height:
+ title: earliest block height available in the store
+ type: string
+ format: uint64
+ height:
+ title: current block height
+ type: string
+ format: uint64
+ timestamp:
+ title: block height timestamp
+ type: string
+ format: date-time
+ app_hash:
+ title: app hash of the current block
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ validator_hash:
+ title: validator hash provided by the consensus header
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ description: StatusResponse defines the response structure for the status of
+ a node.
+ cosmos.consensus.v1.MsgUpdateParams:
+ type: object
+ properties:
+ authority:
+ type: string
+ description: authority is the address that controls the module (defaults
+ to x/gov unless overwritten).
+ block:
+ type: object
+ properties:
+ max_bytes:
+ title: |-
+ Max block size, in bytes.
+ Note: must be greater than 0
+ type: string
+ format: int64
+ max_gas:
+ title: |-
+ Max gas per block.
+ Note: must be greater than or equal to -1
+ type: string
+ format: int64
+ description: |-
+ params defines the x/consensus parameters to update.
+ VersionsParams is not included in this Msg because it is tracked
+ separately in x/upgrade.
+
+ NOTE: All parameters must be supplied.
+ evidence:
+ type: object
+ properties:
+ max_age_num_blocks:
+ type: string
+ description: |-
+ Max age of evidence, in blocks.
+
+ The basic formula for calculating this is: MaxAgeDuration / {average block
+ time}.
+ format: int64
+ max_age_duration:
+ type: string
+ description: |-
+ Max age of evidence, in time.
+
+ It should correspond with an app's "unbonding period" or other similar
+ mechanism for handling [Nothing-At-Stake
+ attacks](https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ#what-is-the-nothing-at-stake-problem-and-how-can-it-be-fixed).
+ max_bytes:
+ title: |-
+ This sets the maximum size of total evidence in bytes that can be committed in a single block,
+ and should fall comfortably under the max block bytes.
+ Default is 1048576 or 1MB.
+ type: string
+ format: int64
+ description: EvidenceParams determine how we handle evidence of malfeasance.
+ validator:
+ type: object
+ properties:
+ pub_key_types:
+ type: array
+ items:
+ type: string
+ description: |-
+ ValidatorParams restrict the public key types validators can use.
+ NOTE: uses ABCI pubkey naming, not Amino names.
+ abci:
+ title: "Since: cosmos-sdk 0.50"
+ type: object
+ properties:
+ vote_extensions_enable_height:
+ type: string
+ description: |-
+ vote_extensions_enable_height configures the first height during which
+ vote extensions will be enabled. During this specified height, and for all
+ subsequent heights, precommit messages that do not contain valid extension data
+ will be considered invalid. Prior to this height, vote extensions will not
+ be used or accepted by validators on the network.
+
+ Once enabled, vote extensions will be created by the application in ExtendVote,
+ passed to the application for validation in VerifyVoteExtension and given
+ to the application to use when proposing a block during PrepareProposal.
+ format: int64
+ description: ABCIParams configure functionality specific to the Application
+ Blockchain Interface.
+ description: MsgUpdateParams is the Msg/UpdateParams request type.
+ cosmos.consensus.v1.MsgUpdateParamsResponse:
+ type: object
+ description: |-
+ MsgUpdateParamsResponse defines the response structure for executing a
+ MsgUpdateParams message.
+ tendermint.types.ABCIParams:
+ type: object
+ properties:
+ vote_extensions_enable_height:
+ type: string
+ description: |-
+ vote_extensions_enable_height configures the first height during which
+ vote extensions will be enabled. During this specified height, and for all
+ subsequent heights, precommit messages that do not contain valid extension data
+ will be considered invalid. Prior to this height, vote extensions will not
+ be used or accepted by validators on the network.
+
+ Once enabled, vote extensions will be created by the application in ExtendVote,
+ passed to the application for validation in VerifyVoteExtension and given
+ to the application to use when proposing a block during PrepareProposal.
+ format: int64
+ description: ABCIParams configure functionality specific to the Application
+ Blockchain Interface.
+ tendermint.types.BlockParams:
+ type: object
+ properties:
+ max_bytes:
+ title: |-
+ Max block size, in bytes.
+ Note: must be greater than 0
+ type: string
+ format: int64
+ max_gas:
+ title: |-
+ Max gas per block.
+ Note: must be greater than or equal to -1
+ type: string
+ format: int64
+ description: BlockParams contains limits on the block size.
+ tendermint.types.EvidenceParams:
+ type: object
+ properties:
+ max_age_num_blocks:
+ type: string
+ description: |-
+ Max age of evidence, in blocks.
+
+ The basic formula for calculating this is: MaxAgeDuration / {average block
+ time}.
+ format: int64
+ max_age_duration:
+ type: string
+ description: |-
+ Max age of evidence, in time.
+
+ It should correspond with an app's "unbonding period" or other similar
+ mechanism for handling [Nothing-At-Stake
+ attacks](https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ#what-is-the-nothing-at-stake-problem-and-how-can-it-be-fixed).
+ max_bytes:
+ title: |-
+ This sets the maximum size of total evidence in bytes that can be committed in a single block,
+ and should fall comfortably under the max block bytes.
+ Default is 1048576 or 1MB.
+ type: string
+ format: int64
+ description: EvidenceParams determine how we handle evidence of malfeasance.
+ tendermint.types.ValidatorParams:
+ type: object
+ properties:
+ pub_key_types:
+ type: array
+ items:
+ type: string
+ description: |-
+ ValidatorParams restrict the public key types validators can use.
+ NOTE: uses ABCI pubkey naming, not Amino names.
+ cosmos.crisis.v1beta1.MsgUpdateParams:
+ type: object
+ properties:
+ authority:
+ type: string
+ description: authority is the address that controls the module (defaults
+ to x/gov unless overwritten).
+ constant_fee:
+ type: object
+ properties:
+ denom:
+ type: string
+ amount:
+ type: string
+ description: constant_fee defines the x/crisis parameter.
+ description: |-
+ MsgUpdateParams is the Msg/UpdateParams request type.
+
+ Since: cosmos-sdk 0.47
+ cosmos.crisis.v1beta1.MsgUpdateParamsResponse:
+ type: object
+ description: |-
+ MsgUpdateParamsResponse defines the response structure for executing a
+ MsgUpdateParams message.
+
+ Since: cosmos-sdk 0.47
+ cosmos.crisis.v1beta1.MsgVerifyInvariant:
+ type: object
+ properties:
+ sender:
+ type: string
+ description: sender is the account address of the private key used to send coins
+ to the fee collector account.
+ invariant_module_name:
+ type: string
+ description: name of the invariant module.
+ invariant_route:
+ type: string
+ description: invariant_route is the msg's invariant route.
+ description: MsgVerifyInvariant represents a message to verify a particular
+ invariant.
+ cosmos.crisis.v1beta1.MsgVerifyInvariantResponse:
+ type: object
+ description: MsgVerifyInvariantResponse defines the Msg/VerifyInvariant response
+ type.
+ cosmos.distribution.v1beta1.MsgCommunityPoolSpend:
+ type: object
+ properties:
+ authority:
+ type: string
+ description: authority is the address that controls the module (defaults
+ to x/gov unless overwritten).
+ recipient: + type: string + amount: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + MsgCommunityPoolSpend defines a message for sending tokens from the community + pool to another account. This message is typically executed via a governance + proposal with the governance module being the executing authority. + + Since: cosmos-sdk 0.47 + cosmos.distribution.v1beta1.MsgCommunityPoolSpendResponse: + type: object + description: |- + MsgCommunityPoolSpendResponse defines the response to executing a + MsgCommunityPoolSpend message. + + Since: cosmos-sdk 0.47 + cosmos.distribution.v1beta1.MsgDepositValidatorRewardsPool: + type: object + properties: + depositor: + type: string + validator_address: + type: string + amount: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + DepositValidatorRewardsPool defines the request structure to provide + additional rewards to delegators from a specific validator. + + Since: cosmos-sdk 0.50 + cosmos.distribution.v1beta1.MsgDepositValidatorRewardsPoolResponse: + type: object + description: |- + MsgDepositValidatorRewardsPoolResponse defines the response to executing a + MsgDepositValidatorRewardsPool message. + + Since: cosmos-sdk 0.50 + cosmos.distribution.v1beta1.MsgFundCommunityPool: + type: object + properties: + amount: + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. 
+ + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + depositor: + type: string + description: |- + MsgFundCommunityPool allows an account to directly + fund the community pool. + cosmos.distribution.v1beta1.MsgFundCommunityPoolResponse: + type: object + description: MsgFundCommunityPoolResponse defines the Msg/FundCommunityPool + response type. + cosmos.distribution.v1beta1.MsgSetWithdrawAddress: + type: object + properties: + delegator_address: + type: string + withdraw_address: + type: string + description: |- + MsgSetWithdrawAddress sets the withdraw address for + a delegator (or validator self-delegation). + cosmos.distribution.v1beta1.MsgSetWithdrawAddressResponse: + type: object + description: |- + MsgSetWithdrawAddressResponse defines the Msg/SetWithdrawAddress response + type. + cosmos.distribution.v1beta1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + community_tax: + type: string + base_proposer_reward: + type: string + description: |- + Deprecated: The base_proposer_reward field is deprecated and is no longer used + in the x/distribution module's reward mechanism. + bonus_proposer_reward: + type: string + description: |- + Deprecated: The bonus_proposer_reward field is deprecated and is no longer used + in the x/distribution module's reward mechanism. + withdraw_addr_enabled: + type: boolean + description: |- + params defines the x/distribution parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. + + Since: cosmos-sdk 0.47 + cosmos.distribution.v1beta1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. 
+ + Since: cosmos-sdk 0.47 + cosmos.distribution.v1beta1.MsgWithdrawDelegatorReward: + type: object + properties: + delegator_address: + type: string + validator_address: + type: string + description: |- + MsgWithdrawDelegatorReward represents delegation withdrawal to a delegator + from a single validator. + cosmos.distribution.v1beta1.MsgWithdrawDelegatorRewardResponse: + type: object + properties: + amount: + title: "Since: cosmos-sdk 0.46" + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + MsgWithdrawDelegatorRewardResponse defines the Msg/WithdrawDelegatorReward + response type. + cosmos.distribution.v1beta1.MsgWithdrawValidatorCommission: + type: object + properties: + validator_address: + type: string + description: |- + MsgWithdrawValidatorCommission withdraws the full commission to the validator + address. + cosmos.distribution.v1beta1.MsgWithdrawValidatorCommissionResponse: + type: object + properties: + amount: + title: "Since: cosmos-sdk 0.46" + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + MsgWithdrawValidatorCommissionResponse defines the + Msg/WithdrawValidatorCommission response type. + cosmos.distribution.v1beta1.Params: + type: object + properties: + community_tax: + type: string + base_proposer_reward: + type: string + description: |- + Deprecated: The base_proposer_reward field is deprecated and is no longer used + in the x/distribution module's reward mechanism. 
+ bonus_proposer_reward: + type: string + description: |- + Deprecated: The bonus_proposer_reward field is deprecated and is no longer used + in the x/distribution module's reward mechanism. + withdraw_addr_enabled: + type: boolean + description: Params defines the set of params for the distribution module. + cosmos.evidence.v1beta1.MsgSubmitEvidence: + type: object + properties: + submitter: + type: string + description: submitter is the signer account address of evidence. + evidence: + type: object + additionalProperties: + type: object + description: evidence defines the evidence of misbehavior. + description: |- + MsgSubmitEvidence represents a message that supports submitting arbitrary + Evidence of misbehavior such as equivocation or counterfactual signing. + cosmos.evidence.v1beta1.MsgSubmitEvidenceResponse: + type: object + properties: + hash: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + description: hash defines the hash of the evidence. + format: byte + description: MsgSubmitEvidenceResponse defines the Msg/SubmitEvidence response + type. + cosmos.feegrant.v1beta1.MsgGrantAllowance: + type: object + properties: + granter: + type: string + description: granter is the address of the user granting an allowance of + their funds. + grantee: + type: string + description: grantee is the address of the user being granted an allowance + of another user's funds. + allowance: + type: object + additionalProperties: + type: object + description: "allowance can be any of basic, periodic, allowed fee allowance." + description: |- + MsgGrantAllowance adds permission for Grantee to spend up to Allowance + of fees from the account of Granter. + cosmos.feegrant.v1beta1.MsgGrantAllowanceResponse: + type: object + description: MsgGrantAllowanceResponse defines the Msg/GrantAllowanceResponse + response type. 
+ cosmos.feegrant.v1beta1.MsgPruneAllowances:
+ type: object
+ properties:
+ pruner:
+ type: string
+ description: pruner is the address of the user pruning expired allowances.
+ description: |-
+ MsgPruneAllowances prunes expired fee allowances.
+
+ Since: cosmos-sdk 0.50
+ cosmos.feegrant.v1beta1.MsgPruneAllowancesResponse:
+ type: object
+ description: |-
+ MsgPruneAllowancesResponse defines the Msg/PruneAllowancesResponse response type.
+
+ Since: cosmos-sdk 0.50
+ cosmos.feegrant.v1beta1.MsgRevokeAllowance:
+ type: object
+ properties:
+ granter:
+ type: string
+ description: granter is the address of the user granting an allowance of
+ their funds.
+ grantee:
+ type: string
+ description: grantee is the address of the user being granted an allowance
+ of another user's funds.
+ description: MsgRevokeAllowance removes any existing Allowance from Granter
+ to Grantee.
+ cosmos.feegrant.v1beta1.MsgRevokeAllowanceResponse:
+ type: object
+ description: MsgRevokeAllowanceResponse defines the Msg/RevokeAllowanceResponse
+ response type.
+ cosmos.gov.v1.MsgCancelProposal:
+ type: object
+ properties:
+ proposal_id:
+ type: string
+ description: proposal_id defines the unique id of the proposal.
+ format: uint64
+ proposer:
+ type: string
+ description: proposer is the account address of the proposer.
+ description: |-
+ MsgCancelProposal is the Msg/CancelProposal request type.
+
+ Since: cosmos-sdk 0.50
+ cosmos.gov.v1.MsgCancelProposalResponse:
+ type: object
+ properties:
+ proposal_id:
+ type: string
+ description: proposal_id defines the unique id of the proposal.
+ format: uint64
+ canceled_time:
+ type: string
+ description: canceled_time is the time when proposal is canceled.
+ format: date-time
+ canceled_height:
+ type: string
+ description: canceled_height defines the block height at which the proposal
+ is canceled.
+ format: uint64
+ description: |-
+ MsgCancelProposalResponse defines the response structure for executing a
+ MsgCancelProposal message.
+ + Since: cosmos-sdk 0.50 + cosmos.gov.v1.MsgDeposit: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + depositor: + type: string + description: depositor defines the deposit addresses from the proposals. + amount: + type: array + description: amount to be deposited by depositor. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: MsgDeposit defines a message to submit a deposit to an existing + proposal. + cosmos.gov.v1.MsgDepositResponse: + type: object + description: MsgDepositResponse defines the Msg/Deposit response type. + cosmos.gov.v1.MsgExecLegacyContent: + type: object + properties: + content: + type: object + additionalProperties: + type: object + description: content is the proposal's content. + authority: + type: string + description: authority must be the gov module address. + description: |- + MsgExecLegacyContent is used to wrap the legacy content field into a message. + This ensures backwards compatibility with v1beta1.MsgSubmitProposal. + cosmos.gov.v1.MsgExecLegacyContentResponse: + type: object + description: MsgExecLegacyContentResponse defines the Msg/ExecLegacyContent + response type. + cosmos.gov.v1.MsgSubmitProposal: + type: object + properties: + messages: + type: array + description: messages are the arbitrary messages to be executed if proposal + passes. + items: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. 
+
+ Example 1: Pack and unpack a message in C++.
+
+ Foo foo = ...;
+ Any any;
+ any.PackFrom(foo);
+ ...
+ if (any.UnpackTo(&foo)) {
+ ...
+ }
+
+ Example 2: Pack and unpack a message in Java.
+
+ Foo foo = ...;
+ Any any = Any.pack(foo);
+ ...
+ if (any.is(Foo.class)) {
+ foo = any.unpack(Foo.class);
+ }
+ // or ...
+ if (any.isSameTypeAs(Foo.getDefaultInstance())) {
+ foo = any.unpack(Foo.getDefaultInstance());
+ }
+
+ Example 3: Pack and unpack a message in Python.
+
+ foo = Foo(...)
+ any = Any()
+ any.Pack(foo)
+ ...
+ if any.Is(Foo.DESCRIPTOR):
+ any.Unpack(foo)
+ ...
+
+ Example 4: Pack and unpack a message in Go.
+
+ foo := &pb.Foo{...}
+ any, err := anypb.New(foo)
+ if err != nil {
+ ...
+ }
+ ...
+ foo := &pb.Foo{}
+ if err := any.UnmarshalTo(foo); err != nil {
+ ...
+ }
+
+ The pack methods provided by the protobuf library will by default use
+ 'type.googleapis.com/full.type.name' as the type URL and the unpack
+ methods only use the fully qualified type name after the last '/'
+ in the type URL, for example "foo.bar.com/x/y.z" will yield type
+ name "y.z".
+
+ JSON
+
+ The JSON representation of an `Any` value uses the regular
+ representation of the deserialized, embedded message, with an
+ additional field `@type` which contains the type URL. Example:
+
+ package google.profile;
+ message Person {
+ string first_name = 1;
+ string last_name = 2;
+ }
+
+ {
+ "@type": "type.googleapis.com/google.profile.Person",
+ "firstName": <string>,
+ "lastName": <string>
+ }
+
+ If the embedded message type is well-known and has a custom JSON
+ representation, that representation will be embedded adding a field
+ `value` which holds the custom JSON in addition to the `@type`
+ field. Example (for message [google.protobuf.Duration][]):
+
+ {
+ "@type": "type.googleapis.com/google.protobuf.Duration",
+ "value": "1.212s"
+ }
+ initial_deposit:
+ type: array
+ description: initial_deposit is the deposit value that must be paid at proposal
+ submission.
+ items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + proposer: + type: string + description: proposer is the account address of the proposer. + metadata: + type: string + description: metadata is any arbitrary metadata attached to the proposal. + title: + type: string + description: |- + title is the title of the proposal. + + Since: cosmos-sdk 0.47 + summary: + title: summary is the summary of the proposal + type: string + description: "Since: cosmos-sdk 0.47" + expedited: + title: expedited defines if the proposal is expedited or not + type: boolean + description: "Since: cosmos-sdk 0.50" + description: |- + MsgSubmitProposal defines an sdk.Msg type that supports submitting arbitrary + proposal Content. + cosmos.gov.v1.MsgSubmitProposalResponse: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + description: MsgSubmitProposalResponse defines the Msg/SubmitProposal response + type. + cosmos.gov.v1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + min_deposit: + type: array + description: Minimum deposit for a proposal to enter voting period. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + max_deposit_period: + type: string + description: |- + Maximum period for Atom holders to deposit on a proposal. Initial value: 2 + months. 
+ voting_period: + type: string + description: Duration of the voting period. + quorum: + type: string + description: |- + Minimum percentage of total stake needed to vote for a result to be + considered valid. + threshold: + type: string + description: "Minimum proportion of Yes votes for proposal to pass.\ + \ Default value: 0.5." + veto_threshold: + type: string + description: |- + Minimum value of Veto votes to Total votes ratio for proposal to be + vetoed. Default value: 1/3. + min_initial_deposit_ratio: + type: string + description: The ratio representing the proportion of the deposit value + that must be paid at proposal submission. + proposal_cancel_ratio: + type: string + description: |- + The cancel ratio which will not be returned back to the depositors when a proposal is cancelled. + + Since: cosmos-sdk 0.50 + proposal_cancel_dest: + type: string + description: |- + The address which will receive (proposal_cancel_ratio * deposit) proposal deposits. + If empty, the (proposal_cancel_ratio * deposit) proposal deposits will be burned. + + Since: cosmos-sdk 0.50 + expedited_voting_period: + type: string + description: |- + Duration of the voting period of an expedited proposal. + + Since: cosmos-sdk 0.50 + expedited_threshold: + type: string + description: |- + Minimum proportion of Yes votes for proposal to pass. Default value: 0.67. + + Since: cosmos-sdk 0.50 + expedited_min_deposit: + type: array + description: Minimum expedited deposit for a proposal to enter voting + period. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. 
+ burn_vote_quorum: + title: burn deposits if a proposal does not meet quorum + type: boolean + burn_proposal_deposit_prevote: + title: burn deposits if the proposal does not enter voting period + type: boolean + burn_vote_veto: + title: burn deposits if quorum with vote type no_veto is met + type: boolean + min_deposit_ratio: + type: string + description: |- + The ratio representing the proportion of the deposit value minimum that must be met when making a deposit. + Default value: 0.01. Meaning that for a chain with a min_deposit of 100stake, a deposit of 1stake would be + required. + + Since: cosmos-sdk 0.50 + description: |- + params defines the x/gov parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. + + Since: cosmos-sdk 0.47 + cosmos.gov.v1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. + + Since: cosmos-sdk 0.47 + cosmos.gov.v1.MsgVote: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + voter: + type: string + description: voter is the voter address for the proposal. + option: + type: string + description: option defines the vote option. + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + metadata: + type: string + description: metadata is any arbitrary metadata attached to the Vote. + description: MsgVote defines a message to cast a vote. + cosmos.gov.v1.MsgVoteResponse: + type: object + description: MsgVoteResponse defines the Msg/Vote response type. + cosmos.gov.v1.MsgVoteWeighted: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. 
+ format: uint64 + voter: + type: string + description: voter is the voter address for the proposal. + options: + type: array + description: options defines the weighted vote options. + items: + type: object + properties: + option: + type: string + description: "option defines the valid vote options, it must not contain\ + \ duplicate vote options." + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + weight: + type: string + description: weight is the vote weight associated with the vote option. + description: WeightedVoteOption defines a unit of vote for vote split. + metadata: + type: string + description: metadata is any arbitrary metadata attached to the VoteWeighted. + description: MsgVoteWeighted defines a message to cast a vote. + cosmos.gov.v1.MsgVoteWeightedResponse: + type: object + description: MsgVoteWeightedResponse defines the Msg/VoteWeighted response type. + cosmos.gov.v1.Params: + type: object + properties: + min_deposit: + type: array + description: Minimum deposit for a proposal to enter voting period. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + max_deposit_period: + type: string + description: |- + Maximum period for Atom holders to deposit on a proposal. Initial value: 2 + months. + voting_period: + type: string + description: Duration of the voting period. + quorum: + type: string + description: |- + Minimum percentage of total stake needed to vote for a result to be + considered valid. + threshold: + type: string + description: "Minimum proportion of Yes votes for proposal to pass. Default\ + \ value: 0.5." 
+ veto_threshold: + type: string + description: |- + Minimum value of Veto votes to Total votes ratio for proposal to be + vetoed. Default value: 1/3. + min_initial_deposit_ratio: + type: string + description: The ratio representing the proportion of the deposit value + that must be paid at proposal submission. + proposal_cancel_ratio: + type: string + description: |- + The cancel ratio which will not be returned back to the depositors when a proposal is cancelled. + + Since: cosmos-sdk 0.50 + proposal_cancel_dest: + type: string + description: |- + The address which will receive (proposal_cancel_ratio * deposit) proposal deposits. + If empty, the (proposal_cancel_ratio * deposit) proposal deposits will be burned. + + Since: cosmos-sdk 0.50 + expedited_voting_period: + type: string + description: |- + Duration of the voting period of an expedited proposal. + + Since: cosmos-sdk 0.50 + expedited_threshold: + type: string + description: |- + Minimum proportion of Yes votes for proposal to pass. Default value: 0.67. + + Since: cosmos-sdk 0.50 + expedited_min_deposit: + type: array + description: Minimum expedited deposit for a proposal to enter voting period. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + burn_vote_quorum: + title: burn deposits if a proposal does not meet quorum + type: boolean + burn_proposal_deposit_prevote: + title: burn deposits if the proposal does not enter voting period + type: boolean + burn_vote_veto: + title: burn deposits if quorum with vote type no_veto is met + type: boolean + min_deposit_ratio: + type: string + description: |- + The ratio representing the proportion of the deposit value minimum that must be met when making a deposit. + Default value: 0.01. 
Meaning that for a chain with a min_deposit of 100stake, a deposit of 1stake would be + required. + + Since: cosmos-sdk 0.50 + description: |- + Params defines the parameters for the x/gov module. + + Since: cosmos-sdk 0.47 + cosmos.gov.v1.VoteOption: + type: string + description: |- + VoteOption enumerates the valid vote options for a given governance proposal. + + - VOTE_OPTION_UNSPECIFIED: VOTE_OPTION_UNSPECIFIED defines a no-op vote option. + - VOTE_OPTION_YES: VOTE_OPTION_YES defines a yes vote option. + - VOTE_OPTION_ABSTAIN: VOTE_OPTION_ABSTAIN defines an abstain vote option. + - VOTE_OPTION_NO: VOTE_OPTION_NO defines a no vote option. + - VOTE_OPTION_NO_WITH_VETO: VOTE_OPTION_NO_WITH_VETO defines a no with veto vote option. + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + cosmos.gov.v1.WeightedVoteOption: + type: object + properties: + option: + type: string + description: "option defines the valid vote options, it must not contain\ + \ duplicate vote options." + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + weight: + type: string + description: weight is the vote weight associated with the vote option. + description: WeightedVoteOption defines a unit of vote for vote split. + cosmos.gov.v1beta1.MsgDeposit: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + depositor: + type: string + description: depositor defines the deposit addresses from the proposals. + amount: + type: array + description: amount to be deposited by depositor. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. 
+ + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: MsgDeposit defines a message to submit a deposit to an existing + proposal. + cosmos.gov.v1beta1.MsgDepositResponse: + type: object + description: MsgDepositResponse defines the Msg/Deposit response type. + cosmos.gov.v1beta1.MsgSubmitProposal: + type: object + properties: + content: + type: object + additionalProperties: + type: object + description: content is the proposal's content. + initial_deposit: + type: array + description: initial_deposit is the deposit value that must be paid at proposal + submission. + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + proposer: + type: string + description: proposer is the account address of the proposer. + description: |- + MsgSubmitProposal defines an sdk.Msg type that supports submitting arbitrary + proposal Content. + cosmos.gov.v1beta1.MsgSubmitProposalResponse: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + description: MsgSubmitProposalResponse defines the Msg/SubmitProposal response + type. + cosmos.gov.v1beta1.MsgVote: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + voter: + type: string + description: voter is the voter address for the proposal. + option: + type: string + description: option defines the vote option. + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + description: MsgVote defines a message to cast a vote. 
+ cosmos.gov.v1beta1.MsgVoteResponse: + type: object + description: MsgVoteResponse defines the Msg/Vote response type. + cosmos.gov.v1beta1.MsgVoteWeighted: + type: object + properties: + proposal_id: + type: string + description: proposal_id defines the unique id of the proposal. + format: uint64 + voter: + type: string + description: voter is the voter address for the proposal. + options: + type: array + description: options defines the weighted vote options. + items: + type: object + properties: + option: + type: string + description: "option defines the valid vote options, it must not contain\ + \ duplicate vote options." + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + weight: + type: string + description: weight is the vote weight associated with the vote option. + description: |- + WeightedVoteOption defines a unit of vote for vote split. + + Since: cosmos-sdk 0.43 + description: |- + MsgVoteWeighted defines a message to cast a vote. + + Since: cosmos-sdk 0.43 + cosmos.gov.v1beta1.MsgVoteWeightedResponse: + type: object + description: |- + MsgVoteWeightedResponse defines the Msg/VoteWeighted response type. + + Since: cosmos-sdk 0.43 + cosmos.gov.v1beta1.VoteOption: + type: string + description: |- + VoteOption enumerates the valid vote options for a given governance proposal. + + - VOTE_OPTION_UNSPECIFIED: VOTE_OPTION_UNSPECIFIED defines a no-op vote option. + - VOTE_OPTION_YES: VOTE_OPTION_YES defines a yes vote option. + - VOTE_OPTION_ABSTAIN: VOTE_OPTION_ABSTAIN defines an abstain vote option. + - VOTE_OPTION_NO: VOTE_OPTION_NO defines a no vote option. + - VOTE_OPTION_NO_WITH_VETO: VOTE_OPTION_NO_WITH_VETO defines a no with veto vote option. 
+ default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + cosmos.gov.v1beta1.WeightedVoteOption: + type: object + properties: + option: + type: string + description: "option defines the valid vote options, it must not contain\ + \ duplicate vote options." + default: VOTE_OPTION_UNSPECIFIED + enum: + - VOTE_OPTION_UNSPECIFIED + - VOTE_OPTION_YES + - VOTE_OPTION_ABSTAIN + - VOTE_OPTION_NO + - VOTE_OPTION_NO_WITH_VETO + weight: + type: string + description: weight is the vote weight associated with the vote option. + description: |- + WeightedVoteOption defines a unit of vote for vote split. + + Since: cosmos-sdk 0.43 + cosmos.mint.v1beta1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + mint_denom: + title: type of coin to mint + type: string + inflation_rate_change: + title: maximum annual change in inflation rate + type: string + inflation_max: + title: maximum inflation rate + type: string + inflation_min: + title: minimum inflation rate + type: string + goal_bonded: + title: goal of percent bonded atoms + type: string + blocks_per_year: + title: expected blocks per year + type: string + format: uint64 + description: |- + params defines the x/mint parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. + + Since: cosmos-sdk 0.47 + cosmos.mint.v1beta1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. 
+ + Since: cosmos-sdk 0.47 + cosmos.mint.v1beta1.Params: + type: object + properties: + mint_denom: + title: type of coin to mint + type: string + inflation_rate_change: + title: maximum annual change in inflation rate + type: string + inflation_max: + title: maximum inflation rate + type: string + inflation_min: + title: minimum inflation rate + type: string + goal_bonded: + title: goal of percent bonded atoms + type: string + blocks_per_year: + title: expected blocks per year + type: string + format: uint64 + description: Params defines the parameters for the x/mint module. + cosmos.params.v1beta1.ParamChange: + type: object + properties: + subspace: + type: string + key: + type: string + value: + type: string + description: |- + ParamChange defines an individual parameter change, for use in + ParameterChangeProposal. + cosmos.params.v1beta1.QueryParamsResponse: + type: object + properties: + param: + type: object + properties: + subspace: + type: string + key: + type: string + value: + type: string + description: param defines the queried parameter. + description: QueryParamsResponse is response type for the Query/Params RPC method. + cosmos.params.v1beta1.QuerySubspacesResponse: + type: object + properties: + subspaces: + type: array + items: + type: object + properties: + subspace: + type: string + keys: + type: array + items: + type: string + description: |- + Subspace defines a parameter subspace name and all the keys that exist for + the subspace. + + Since: cosmos-sdk 0.46 + description: |- + QuerySubspacesResponse defines the response types for querying for all + registered subspaces and all keys for a subspace. + + Since: cosmos-sdk 0.46 + cosmos.params.v1beta1.Subspace: + type: object + properties: + subspace: + type: string + keys: + type: array + items: + type: string + description: |- + Subspace defines a parameter subspace name and all the keys that exist for + the subspace. 
+ + Since: cosmos-sdk 0.46 + cosmos.slashing.v1beta1.MsgUnjail: + title: MsgUnjail defines the Msg/Unjail request type + type: object + properties: + validator_addr: + type: string + cosmos.slashing.v1beta1.MsgUnjailResponse: + title: MsgUnjailResponse defines the Msg/Unjail response type + type: object + cosmos.slashing.v1beta1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + signed_blocks_window: + type: string + format: int64 + min_signed_per_window: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + downtime_jail_duration: + type: string + slash_fraction_double_sign: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + slash_fraction_downtime: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + description: |- + params defines the x/slashing parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. + + Since: cosmos-sdk 0.47 + cosmos.slashing.v1beta1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. 
+ + Since: cosmos-sdk 0.47 + cosmos.slashing.v1beta1.Params: + type: object + properties: + signed_blocks_window: + type: string + format: int64 + min_signed_per_window: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + downtime_jail_duration: + type: string + slash_fraction_double_sign: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + slash_fraction_downtime: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + description: Params represents the parameters used by the slashing module. + cosmos.staking.v1beta1.CommissionRates: + type: object + properties: + rate: + type: string + description: "rate is the commission rate charged to delegators, as a fraction." + max_rate: + type: string + description: "max_rate defines the maximum commission rate which validator\ + \ can ever charge, as a fraction." + max_change_rate: + type: string + description: "max_change_rate defines the maximum daily increase of the\ + \ validator commission, as a fraction." + description: |- + CommissionRates defines the initial commission rates to be used for creating + a validator. + cosmos.staking.v1beta1.Description: + type: object + properties: + moniker: + type: string + description: moniker defines a human-readable name for the validator. + identity: + type: string + description: identity defines an optional identity signature (ex. UPort + or Keybase). + website: + type: string + description: website defines an optional website link. + security_contact: + type: string + description: security_contact defines an optional email for security contact. + details: + type: string + description: details define other optional details. + description: Description defines a validator description. 
+ cosmos.staking.v1beta1.MsgBeginRedelegate: + type: object + properties: + delegator_address: + type: string + validator_src_address: + type: string + validator_dst_address: + type: string + amount: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + MsgBeginRedelegate defines a SDK message for performing a redelegation + of coins from a delegator and source validator to a destination validator. + cosmos.staking.v1beta1.MsgBeginRedelegateResponse: + type: object + properties: + completion_time: + type: string + format: date-time + description: MsgBeginRedelegateResponse defines the Msg/BeginRedelegate response + type. + cosmos.staking.v1beta1.MsgCancelUnbondingDelegation: + title: MsgCancelUnbondingDelegation defines the SDK message for performing a + cancel unbonding delegation for delegator + type: object + properties: + delegator_address: + type: string + validator_address: + type: string + amount: + title: amount is always less than or equal to unbonding delegation entry + balance + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + creation_height: + type: string + description: creation_height is the height which the unbonding took place. 
+ format: int64 + description: "Since: cosmos-sdk 0.46" + cosmos.staking.v1beta1.MsgCancelUnbondingDelegationResponse: + title: MsgCancelUnbondingDelegationResponse + type: object + description: "Since: cosmos-sdk 0.46" + cosmos.staking.v1beta1.MsgCreateValidator: + type: object + properties: + description: + type: object + properties: + moniker: + type: string + description: moniker defines a human-readable name for the validator. + identity: + type: string + description: identity defines an optional identity signature (ex. UPort + or Keybase). + website: + type: string + description: website defines an optional website link. + security_contact: + type: string + description: security_contact defines an optional email for security + contact. + details: + type: string + description: details define other optional details. + description: Description defines a validator description. + commission: + type: object + properties: + rate: + type: string + description: "rate is the commission rate charged to delegators, as\ + \ a fraction." + max_rate: + type: string + description: "max_rate defines the maximum commission rate which validator\ + \ can ever charge, as a fraction." + max_change_rate: + type: string + description: "max_change_rate defines the maximum daily increase of\ + \ the validator commission, as a fraction." + description: |- + CommissionRates defines the initial commission rates to be used for creating + a validator. + min_self_delegation: + type: string + delegator_address: + type: string + description: |- + Deprecated: Use of Delegator Address in MsgCreateValidator is deprecated. + The validator address bytes and delegator address bytes refer to the same account while creating validator (differ + only in bech32 notation). 
+ validator_address: + type: string + pubkey: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. 
Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": <string>, + "lastName": <string> + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + value: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: MsgCreateValidator defines a SDK message for creating a new validator. + cosmos.staking.v1beta1.MsgCreateValidatorResponse: + type: object + description: MsgCreateValidatorResponse defines the Msg/CreateValidator response + type. + cosmos.staking.v1beta1.MsgDelegate: + type: object + properties: + delegator_address: + type: string + validator_address: + type: string + amount: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + MsgDelegate defines a SDK message for performing a delegation of coins + from a delegator to a validator. + cosmos.staking.v1beta1.MsgDelegateResponse: + type: object + description: MsgDelegateResponse defines the Msg/Delegate response type. + cosmos.staking.v1beta1.MsgEditValidator: + type: object + properties: + description: + type: object + properties: + moniker: + type: string + description: moniker defines a human-readable name for the validator. 
+ identity: + type: string + description: identity defines an optional identity signature (ex. UPort + or Keybase). + website: + type: string + description: website defines an optional website link. + security_contact: + type: string + description: security_contact defines an optional email for security + contact. + details: + type: string + description: details define other optional details. + description: Description defines a validator description. + validator_address: + type: string + commission_rate: + title: |- + We pass a reference to the new commission rate and min self delegation as + it's not mandatory to update. If not updated, the deserialized rate will be + zero with no way to distinguish if an update was intended. + REF: #2373 + type: string + min_self_delegation: + type: string + description: MsgEditValidator defines a SDK message for editing an existing + validator. + cosmos.staking.v1beta1.MsgEditValidatorResponse: + type: object + description: MsgEditValidatorResponse defines the Msg/EditValidator response + type. + cosmos.staking.v1beta1.MsgUndelegate: + type: object + properties: + delegator_address: + type: string + validator_address: + type: string + amount: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: |- + MsgUndelegate defines a SDK message for performing an undelegation from a + delegate and a validator. + cosmos.staking.v1beta1.MsgUndelegateResponse: + type: object + properties: + completion_time: + type: string + format: date-time + amount: + title: amount returns the amount of undelegated coins + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. 
+ + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + description: MsgUndelegateResponse defines the Msg/Undelegate response type. + cosmos.staking.v1beta1.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: + unbonding_time: + type: string + description: unbonding_time is the time duration of unbonding. + max_validators: + type: integer + description: max_validators is the maximum number of validators. + format: int64 + max_entries: + type: integer + description: max_entries is the max entries for either unbonding delegation + or redelegation (per pair/trio). + format: int64 + historical_entries: + type: integer + description: historical_entries is the number of historical entries + to persist. + format: int64 + bond_denom: + type: string + description: bond_denom defines the bondable coin denomination. + min_commission_rate: + title: min_commission_rate is the chain-wide minimum commission rate + that a validator can charge their delegators + type: string + description: |- + params defines the x/staking parameters to update. + + NOTE: All parameters must be supplied. + description: |- + MsgUpdateParams is the Msg/UpdateParams request type. + + Since: cosmos-sdk 0.47 + cosmos.staking.v1beta1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. + + Since: cosmos-sdk 0.47 + cosmos.staking.v1beta1.Params: + type: object + properties: + unbonding_time: + type: string + description: unbonding_time is the time duration of unbonding. + max_validators: + type: integer + description: max_validators is the maximum number of validators. 
+ format: int64 + max_entries: + type: integer + description: max_entries is the max entries for either unbonding delegation + or redelegation (per pair/trio). + format: int64 + historical_entries: + type: integer + description: historical_entries is the number of historical entries to persist. + format: int64 + bond_denom: + type: string + description: bond_denom defines the bondable coin denomination. + min_commission_rate: + title: min_commission_rate is the chain-wide minimum commission rate that + a validator can charge their delegators + type: string + description: Params defines the parameters for the x/staking module. + ibc.applications.fee.v1.Fee: + title: "Fee defines the ICS29 receive, acknowledgement and timeout fees" + type: object + properties: + recv_fee: + title: the packet receive fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + ack_fee: + title: the packet acknowledgement fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + timeout_fee: + title: the packet timeout fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. 
+ ibc.applications.fee.v1.MsgPayPacketFee: + title: |- + MsgPayPacketFee defines the request type for the PayPacketFee rpc + This Msg can be used to pay for a packet at the next sequence send & should be combined with the Msg that will be + paid for + type: object + properties: + fee: + title: "fee encapsulates the recv, ack and timeout fees associated with\ + \ an IBC packet" + type: object + properties: + recv_fee: + title: the packet receive fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + ack_fee: + title: the packet acknowledgement fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + timeout_fee: + title: the packet timeout fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. 
+ source_port_id: + title: the source port unique identifier + type: string + source_channel_id: + title: the source channel unique identifier + type: string + signer: + title: account address to refund fee if necessary + type: string + relayers: + title: optional list of relayers permitted to receive packet fees + type: array + items: + type: string + ibc.applications.fee.v1.MsgPayPacketFeeAsync: + title: |- + MsgPayPacketFeeAsync defines the request type for the PayPacketFeeAsync rpc + This Msg can be used to pay for a packet at a specified sequence (instead of the next sequence send) + type: object + properties: + packet_id: + title: "unique packet identifier comprised of the channel ID, port ID and\ + \ sequence" + type: object + properties: + port_id: + title: channel port identifier + type: string + channel_id: + title: channel unique identifier + type: string + sequence: + title: packet sequence + type: string + format: uint64 + packet_fee: + title: the packet fee associated with a particular IBC packet + type: object + properties: + fee: + title: "fee encapsulates the recv, ack and timeout fees associated with\ + \ an IBC packet" + type: object + properties: + recv_fee: + title: the packet receive fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + ack_fee: + title: the packet acknowledgement fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. 
+ timeout_fee: + title: the packet timeout fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + refund_address: + title: the refund address for unspent fees + type: string + relayers: + title: optional list of relayers permitted to receive fees + type: array + items: + type: string + ibc.applications.fee.v1.MsgPayPacketFeeAsyncResponse: + title: MsgPayPacketFeeAsyncResponse defines the response type for the PayPacketFeeAsync + rpc + type: object + ibc.applications.fee.v1.MsgPayPacketFeeResponse: + title: MsgPayPacketFeeResponse defines the response type for the PayPacketFee + rpc + type: object + ibc.applications.fee.v1.MsgRegisterCounterpartyPayee: + title: MsgRegisterCounterpartyPayee defines the request type for the RegisterCounterpartyPayee + rpc + type: object + properties: + port_id: + title: unique port identifier + type: string + channel_id: + title: unique channel identifier + type: string + relayer: + title: the relayer address + type: string + counterparty_payee: + title: the counterparty payee address + type: string + ibc.applications.fee.v1.MsgRegisterCounterpartyPayeeResponse: + title: MsgRegisterCounterpartyPayeeResponse defines the response type for the + RegisterCounterpartyPayee rpc + type: object + ibc.applications.fee.v1.MsgRegisterPayee: + title: MsgRegisterPayee defines the request type for the RegisterPayee rpc + type: object + properties: + port_id: + title: unique port identifier + type: string + channel_id: + title: unique channel identifier + type: string + relayer: + title: the relayer address + type: string + payee: + title: the payee address + type: string + ibc.applications.fee.v1.MsgRegisterPayeeResponse: + title: MsgRegisterPayeeResponse defines the response type for the RegisterPayee + rpc + type: 
object + ibc.applications.fee.v1.PacketFee: + title: "PacketFee contains ICS29 relayer fees, refund address and optional list\ + \ of permitted relayers" + type: object + properties: + fee: + title: "fee encapsulates the recv, ack and timeout fees associated with\ + \ an IBC packet" + type: object + properties: + recv_fee: + title: the packet receive fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + ack_fee: + title: the packet acknowledgement fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. + timeout_fee: + title: the packet timeout fee + type: array + items: + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. 
+ refund_address: + title: the refund address for unspent fees + type: string + relayers: + title: optional list of relayers permitted to receive fees + type: array + items: + type: string + ibc.core.channel.v1.PacketId: + title: |- + PacketId is an identifier for a unique Packet + Source chains refer to packets by source port/channel + Destination chains refer to packets by destination port/channel + type: object + properties: + port_id: + title: channel port identifier + type: string + channel_id: + title: channel unique identifier + type: string + sequence: + title: packet sequence + type: string + format: uint64 + ibc.applications.interchain_accounts.controller.v1.MsgRegisterInterchainAccount: + title: MsgRegisterInterchainAccount defines the payload for Msg/RegisterAccount + type: object + properties: + owner: + type: string + connection_id: + type: string + version: + type: string + ibc.applications.interchain_accounts.controller.v1.MsgRegisterInterchainAccountResponse: + title: MsgRegisterInterchainAccountResponse defines the response for Msg/RegisterAccount + type: object + properties: + channel_id: + type: string + port_id: + type: string + ibc.applications.interchain_accounts.controller.v1.MsgSendTx: + title: MsgSendTx defines the payload for Msg/SendTx + type: object + properties: + owner: + type: string + connection_id: + type: string + packet_data: + type: object + properties: + type: + title: |- + Type defines a classification of message issued from a controller chain to its associated interchain accounts + host + type: string + description: |- + - TYPE_UNSPECIFIED: Default zero value enumeration + - TYPE_EXECUTE_TX: Execute a transaction on an interchain accounts host chain + default: TYPE_UNSPECIFIED + enum: + - TYPE_UNSPECIFIED + - TYPE_EXECUTE_TX + data: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + memo: + type: string + description: "InterchainAccountPacketData is comprised of a raw 
transaction,\ + \ type of transaction and optional memo field." + relative_timeout: + type: string + description: |- + Relative timeout timestamp provided will be added to the current block time during transaction execution. + The timeout timestamp must be non-zero. + format: uint64 + ibc.applications.interchain_accounts.controller.v1.MsgSendTxResponse: + title: MsgSendTxResponse defines the response for MsgSendTx + type: object + properties: + sequence: + type: string + format: uint64 + ibc.applications.interchain_accounts.controller.v1.MsgUpdateParams: + title: MsgUpdateParams defines the payload for Msg/UpdateParams + type: object + properties: + signer: + title: signer address + type: string + params: + type: object + properties: + controller_enabled: + type: boolean + description: controller_enabled enables or disables the controller submodule. + description: |- + params defines the 27-interchain-accounts/controller parameters to update. + + NOTE: All parameters must be supplied. + ibc.applications.interchain_accounts.controller.v1.MsgUpdateParamsResponse: + title: MsgUpdateParamsResponse defines the response for Msg/UpdateParams + type: object + ibc.applications.interchain_accounts.controller.v1.Params: + type: object + properties: + controller_enabled: + type: boolean + description: controller_enabled enables or disables the controller submodule. + description: |- + Params defines the set of on-chain interchain accounts parameters. + The following parameters may be used to disable the controller submodule. 
+ ibc.applications.interchain_accounts.v1.InterchainAccountPacketData: + type: object + properties: + type: + title: |- + Type defines a classification of message issued from a controller chain to its associated interchain accounts + host + type: string + description: |- + - TYPE_UNSPECIFIED: Default zero value enumeration + - TYPE_EXECUTE_TX: Execute a transaction on an interchain accounts host chain + default: TYPE_UNSPECIFIED + enum: + - TYPE_UNSPECIFIED + - TYPE_EXECUTE_TX + data: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + memo: + type: string + description: "InterchainAccountPacketData is comprised of a raw transaction,\ + \ type of transaction and optional memo field." + ibc.applications.interchain_accounts.v1.Type: + title: |- + Type defines a classification of message issued from a controller chain to its associated interchain accounts + host + type: string + description: |- + - TYPE_UNSPECIFIED: Default zero value enumeration + - TYPE_EXECUTE_TX: Execute a transaction on an interchain accounts host chain + default: TYPE_UNSPECIFIED + enum: + - TYPE_UNSPECIFIED + - TYPE_EXECUTE_TX + ibc.applications.interchain_accounts.host.v1.MsgUpdateParams: + title: MsgUpdateParams defines the payload for Msg/UpdateParams + type: object + properties: + signer: + title: signer address + type: string + params: + type: object + properties: + host_enabled: + type: boolean + description: host_enabled enables or disables the host submodule. + allow_messages: + type: array + description: allow_messages defines a list of sdk message typeURLs allowed + to be executed on a host chain. + items: + type: string + description: |- + params defines the 27-interchain-accounts/host parameters to update. + + NOTE: All parameters must be supplied. 
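+    # Illustrative example (editorial note, not generated from the proto
+    # sources): an InterchainAccountPacketData value matching the schema
+    # above; the data field carries base64-encoded bytes and the value
+    # shown here is hypothetical.
+    #   type: TYPE_EXECUTE_TX
+    #   data: "CgR0ZXN0"
+    #   memo: "example"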
+ ibc.applications.interchain_accounts.host.v1.MsgUpdateParamsResponse: + title: MsgUpdateParamsResponse defines the response for Msg/UpdateParams + type: object + ibc.applications.interchain_accounts.host.v1.Params: + type: object + properties: + host_enabled: + type: boolean + description: host_enabled enables or disables the host submodule. + allow_messages: + type: array + description: allow_messages defines a list of sdk message typeURLs allowed + to be executed on a host chain. + items: + type: string + description: |- + Params defines the set of on-chain interchain accounts parameters. + The following parameters may be used to disable the host submodule. + ibc.applications.transfer.v1.MsgTransfer: + title: |- + MsgTransfer defines a msg to transfer fungible tokens (i.e. Coins) between + ICS20 enabled chains. See ICS Spec here: + https://github.com/cosmos/ibc/tree/master/spec/app/ics-020-fungible-token-transfer#data-structures + type: object + properties: + source_port: + title: the port on which the packet will be sent + type: string + source_channel: + title: the channel by which the packet will be sent + type: string + token: + title: the tokens to be transferred + type: object + properties: + denom: + type: string + amount: + type: string + description: |- + Coin defines a token with a denomination and an amount. + + NOTE: The amount field is an Int which implements the custom method + signatures required by gogoproto. 
+ sender: + title: the sender address + type: string + receiver: + title: the recipient address on the destination chain + type: string + timeout_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Timeout height relative to the current block height. + The timeout is disabled when set to 0. + timeout_timestamp: + type: string + description: |- + Timeout timestamp in absolute nanoseconds since unix epoch. + The timeout is disabled when set to 0. + format: uint64 + memo: + title: optional memo + type: string + ibc.applications.transfer.v1.MsgTransferResponse: + type: object + properties: + sequence: + title: sequence number of the transfer packet sent + type: string + format: uint64 + description: MsgTransferResponse defines the Msg/Transfer response type. + ibc.applications.transfer.v1.MsgUpdateParams: + type: object + properties: + signer: + title: signer address + type: string + params: + type: object + properties: + send_enabled: + type: boolean + description: |- + send_enabled enables or disables all cross-chain token transfers from this + chain. + receive_enabled: + type: boolean + description: |- + receive_enabled enables or disables all cross-chain token transfers to this + chain. + description: |- + params defines the transfer parameters to update. + + NOTE: All parameters must be supplied. + description: MsgUpdateParams is the Msg/UpdateParams request type. + ibc.applications.transfer.v1.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. 
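+    # Illustrative example (editorial note, not generated from the proto
+    # sources): a request body matching the MsgTransfer schema above. All
+    # identifiers, addresses and amounts below are hypothetical.
+    #   source_port: transfer
+    #   source_channel: channel-0
+    #   token: { denom: uatom, amount: "1000" }
+    #   sender: <bech32 sender address>
+    #   receiver: <bech32 recipient address>
+    #   timeout_height: { revision_number: "1", revision_height: "2000000" }
+    #   timeout_timestamp: "0"
+    #   memo: ""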
+ ibc.applications.transfer.v1.Params: + type: object + properties: + send_enabled: + type: boolean + description: |- + send_enabled enables or disables all cross-chain token transfers from this + chain. + receive_enabled: + type: boolean + description: |- + receive_enabled enables or disables all cross-chain token transfers to this + chain. + description: |- + Params defines the set of IBC transfer parameters. + NOTE: To prevent a single token from being transferred, set the + TransfersEnabled parameter to true and then set the bank module's SendEnabled + parameter for the denomination to false. + ibc.core.client.v1.Height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + ibc.core.channel.v1.Channel: + type: object + properties: + state: + title: current state of the channel end + type: string + description: |- + State defines if a channel is in one of the following states: + CLOSED, INIT, TRYOPEN, OPEN or UNINITIALIZED. + + - STATE_UNINITIALIZED_UNSPECIFIED: Default State + - STATE_INIT: A channel has just started the opening handshake. + - STATE_TRYOPEN: A channel has acknowledged the handshake step on the counterparty chain. + - STATE_OPEN: A channel has completed the handshake. 
Open channels are + ready to send and receive packets. + - STATE_CLOSED: A channel has been closed and can no longer be used to send or receive + packets. + default: STATE_UNINITIALIZED_UNSPECIFIED + enum: + - STATE_UNINITIALIZED_UNSPECIFIED + - STATE_INIT + - STATE_TRYOPEN + - STATE_OPEN + - STATE_CLOSED + ordering: + title: whether the channel is ordered or unordered + type: string + description: |- + - ORDER_NONE_UNSPECIFIED: zero-value for channel ordering + - ORDER_UNORDERED: packets can be delivered in any order, which may differ from the order in + which they were sent. + - ORDER_ORDERED: packets are delivered exactly in the order which they were sent + default: ORDER_NONE_UNSPECIFIED + enum: + - ORDER_NONE_UNSPECIFIED + - ORDER_UNORDERED + - ORDER_ORDERED + counterparty: + title: counterparty channel end + type: object + properties: + port_id: + type: string + description: port on the counterparty chain which owns the other end + of the channel. + channel_id: + title: channel end on the counterparty chain + type: string + connection_hops: + title: |- + list of connection identifiers, in order, along which packets sent on + this channel will travel + type: array + items: + type: string + version: + title: "opaque channel version, which is agreed upon during the handshake" + type: string + description: |- + Channel defines a pipeline for exactly-once packet delivery between specific + modules on separate blockchains, which has at least one end capable of + sending packets and one end capable of receiving packets. + ibc.core.channel.v1.Counterparty: + title: Counterparty defines a channel end counterparty + type: object + properties: + port_id: + type: string + description: port on the counterparty chain which owns the other end of + the channel. 
+ channel_id: + title: channel end on the counterparty chain + type: string + ibc.core.channel.v1.MsgAcknowledgement: + title: MsgAcknowledgement receives incoming IBC acknowledgement + type: object + properties: + packet: + title: Packet defines a type that carries data across different chains through + IBC + type: object + properties: + sequence: + type: string + description: |- + number corresponds to the order of sends and receives, where a Packet + with an earlier sequence number must be sent and received before a Packet + with a later sequence number. + format: uint64 + source_port: + type: string + description: identifies the port on the sending chain. + source_channel: + type: string + description: identifies the channel end on the sending chain. + destination_port: + type: string + description: identifies the port on the receiving chain. + destination_channel: + type: string + description: identifies the channel end on the receiving chain. + data: + title: actual opaque bytes transferred directly to the application module + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + timeout_height: + title: block height after which the packet times out + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + timeout_timestamp: + title: block timestamp (in nanoseconds) after which the packet times + out + type: string + format: uint64 + acknowledgement: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_acked: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + signer: + type: string + ibc.core.channel.v1.MsgAcknowledgementResponse: + type: object + properties: + result: + title: ResponseResultType defines the possible outcomes of the execution + of a message + type: string + description: |- + - RESPONSE_RESULT_TYPE_UNSPECIFIED: Default zero value enumeration + - RESPONSE_RESULT_TYPE_NOOP: The message did not call the IBC application callbacks (because, for example, the packet had already been relayed) + - RESPONSE_RESULT_TYPE_SUCCESS: The message was executed successfully + default: RESPONSE_RESULT_TYPE_UNSPECIFIED + enum: + - RESPONSE_RESULT_TYPE_UNSPECIFIED + - RESPONSE_RESULT_TYPE_NOOP + - RESPONSE_RESULT_TYPE_SUCCESS + description: MsgAcknowledgementResponse defines the Msg/Acknowledgement response + type. + ibc.core.channel.v1.MsgChannelCloseConfirm: + type: object + properties: + port_id: + type: string + channel_id: + type: string + proof_init: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + signer: + type: string + description: |- + MsgChannelCloseConfirm defines a msg sent by a Relayer to Chain B + to acknowledge the change of channel state to CLOSED on Chain A. + ibc.core.channel.v1.MsgChannelCloseConfirmResponse: + type: object + description: |- + MsgChannelCloseConfirmResponse defines the Msg/ChannelCloseConfirm response + type. + ibc.core.channel.v1.MsgChannelCloseInit: + type: object + properties: + port_id: + type: string + channel_id: + type: string + signer: + type: string + description: |- + MsgChannelCloseInit defines a msg sent by a Relayer to Chain A + to close a channel with Chain B. + ibc.core.channel.v1.MsgChannelCloseInitResponse: + type: object + description: MsgChannelCloseInitResponse defines the Msg/ChannelCloseInit response + type. + ibc.core.channel.v1.MsgChannelOpenAck: + type: object + properties: + port_id: + type: string + channel_id: + type: string + counterparty_channel_id: + type: string + counterparty_version: + type: string + proof_try: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + signer: + type: string + description: |- + MsgChannelOpenAck defines a msg sent by a Relayer to Chain A to acknowledge + the change of channel state to TRYOPEN on Chain B. + ibc.core.channel.v1.MsgChannelOpenAckResponse: + type: object + description: MsgChannelOpenAckResponse defines the Msg/ChannelOpenAck response + type. + ibc.core.channel.v1.MsgChannelOpenConfirm: + type: object + properties: + port_id: + type: string + channel_id: + type: string + proof_ack: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + signer: + type: string + description: |- + MsgChannelOpenConfirm defines a msg sent by a Relayer to Chain B to + acknowledge the change of channel state to OPEN on Chain A. + ibc.core.channel.v1.MsgChannelOpenConfirmResponse: + type: object + description: |- + MsgChannelOpenConfirmResponse defines the Msg/ChannelOpenConfirm response + type. 
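+    # Illustrative example (editorial note, not generated from the proto
+    # sources): the Height object used as proof_height throughout the
+    # handshake messages above; uint64 fields are encoded as strings and
+    # the values shown are hypothetical.
+    #   proof_height: { revision_number: "4", revision_height: "7654321" }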
+ ibc.core.channel.v1.MsgChannelOpenInit: + type: object + properties: + port_id: + type: string + channel: + type: object + properties: + state: + title: current state of the channel end + type: string + description: |- + State defines if a channel is in one of the following states: + CLOSED, INIT, TRYOPEN, OPEN or UNINITIALIZED. + + - STATE_UNINITIALIZED_UNSPECIFIED: Default State + - STATE_INIT: A channel has just started the opening handshake. + - STATE_TRYOPEN: A channel has acknowledged the handshake step on the counterparty chain. + - STATE_OPEN: A channel has completed the handshake. Open channels are + ready to send and receive packets. + - STATE_CLOSED: A channel has been closed and can no longer be used to send or receive + packets. + default: STATE_UNINITIALIZED_UNSPECIFIED + enum: + - STATE_UNINITIALIZED_UNSPECIFIED + - STATE_INIT + - STATE_TRYOPEN + - STATE_OPEN + - STATE_CLOSED + ordering: + title: whether the channel is ordered or unordered + type: string + description: |- + - ORDER_NONE_UNSPECIFIED: zero-value for channel ordering + - ORDER_UNORDERED: packets can be delivered in any order, which may differ from the order in + which they were sent. + - ORDER_ORDERED: packets are delivered exactly in the order which they were sent + default: ORDER_NONE_UNSPECIFIED + enum: + - ORDER_NONE_UNSPECIFIED + - ORDER_UNORDERED + - ORDER_ORDERED + counterparty: + title: counterparty channel end + type: object + properties: + port_id: + type: string + description: port on the counterparty chain which owns the other + end of the channel. 
+ channel_id: + title: channel end on the counterparty chain + type: string + connection_hops: + title: |- + list of connection identifiers, in order, along which packets sent on + this channel will travel + type: array + items: + type: string + version: + title: "opaque channel version, which is agreed upon during the handshake" + type: string + description: |- + Channel defines a pipeline for exactly-once packet delivery between specific + modules on separate blockchains, which has at least one end capable of + sending packets and one end capable of receiving packets. + signer: + type: string + description: |- + MsgChannelOpenInit defines an sdk.Msg to initialize a channel handshake. It + is called by a relayer on Chain A. + ibc.core.channel.v1.MsgChannelOpenInitResponse: + type: object + properties: + channel_id: + type: string + version: + type: string + description: MsgChannelOpenInitResponse defines the Msg/ChannelOpenInit response + type. + ibc.core.channel.v1.MsgChannelOpenTry: + type: object + properties: + port_id: + type: string + previous_channel_id: + type: string + description: "Deprecated: this field is unused. Crossing hellos are no\ + \ longer supported in core IBC." + channel: + type: object + properties: + state: + title: current state of the channel end + type: string + description: |- + State defines if a channel is in one of the following states: + CLOSED, INIT, TRYOPEN, OPEN or UNINITIALIZED. + + - STATE_UNINITIALIZED_UNSPECIFIED: Default State + - STATE_INIT: A channel has just started the opening handshake. + - STATE_TRYOPEN: A channel has acknowledged the handshake step on the counterparty chain. + - STATE_OPEN: A channel has completed the handshake. Open channels are + ready to send and receive packets. + - STATE_CLOSED: A channel has been closed and can no longer be used to send or receive + packets. 
+ default: STATE_UNINITIALIZED_UNSPECIFIED + enum: + - STATE_UNINITIALIZED_UNSPECIFIED + - STATE_INIT + - STATE_TRYOPEN + - STATE_OPEN + - STATE_CLOSED + ordering: + title: whether the channel is ordered or unordered + type: string + description: |- + - ORDER_NONE_UNSPECIFIED: zero-value for channel ordering + - ORDER_UNORDERED: packets can be delivered in any order, which may differ from the order in + which they were sent. + - ORDER_ORDERED: packets are delivered exactly in the order which they were sent + default: ORDER_NONE_UNSPECIFIED + enum: + - ORDER_NONE_UNSPECIFIED + - ORDER_UNORDERED + - ORDER_ORDERED + counterparty: + title: counterparty channel end + type: object + properties: + port_id: + type: string + description: port on the counterparty chain which owns the other + end of the channel. + channel_id: + title: channel end on the counterparty chain + type: string + connection_hops: + title: |- + list of connection identifiers, in order, along which packets sent on + this channel will travel + type: array + items: + type: string + version: + title: "opaque channel version, which is agreed upon during the handshake" + type: string + description: |- + Channel defines a pipeline for exactly-once packet delivery between specific + modules on separate blockchains, which has at least one end capable of + sending packets and one end capable of receiving packets. 
+ counterparty_version: + type: string + proof_init: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset. + signer: + type: string + description: |- + MsgChannelOpenTry defines a msg sent by a Relayer to try to open a channel + on Chain B. The version field within the Channel field has been deprecated. Its + value will be ignored by core IBC. + ibc.core.channel.v1.MsgChannelOpenTryResponse: + type: object + properties: + version: + type: string + channel_id: + type: string + description: MsgChannelOpenTryResponse defines the Msg/ChannelOpenTry response + type. + ibc.core.channel.v1.MsgRecvPacket: + title: MsgRecvPacket receives incoming IBC packet + type: object + properties: + packet: + title: Packet defines a type that carries data across different chains through + IBC + type: object + properties: + sequence: + type: string + description: |- + number corresponds to the order of sends and receives, where a Packet + with an earlier sequence number must be sent and received before a Packet + with a later sequence number. 
+ format: uint64 + source_port: + type: string + description: identifies the port on the sending chain. + source_channel: + type: string + description: identifies the channel end on the sending chain. + destination_port: + type: string + description: identifies the port on the receiving chain. + destination_channel: + type: string + description: identifies the channel end on the receiving chain. + data: + title: actual opaque bytes transferred directly to the application module + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + timeout_height: + title: block height after which the packet times out + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + timeout_timestamp: + title: block timestamp (in nanoseconds) after which the packet times + out + type: string + format: uint64 + proof_commitment: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g.
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + signer: + type: string + ibc.core.channel.v1.MsgRecvPacketResponse: + type: object + properties: + result: + title: ResponseResultType defines the possible outcomes of the execution + of a message + type: string + description: |- + - RESPONSE_RESULT_TYPE_UNSPECIFIED: Default zero value enumeration + - RESPONSE_RESULT_TYPE_NOOP: The message did not call the IBC application callbacks (because, for example, the packet had already been relayed) + - RESPONSE_RESULT_TYPE_SUCCESS: The message was executed successfully + default: RESPONSE_RESULT_TYPE_UNSPECIFIED + enum: + - RESPONSE_RESULT_TYPE_UNSPECIFIED + - RESPONSE_RESULT_TYPE_NOOP + - RESPONSE_RESULT_TYPE_SUCCESS + description: MsgRecvPacketResponse defines the Msg/RecvPacket response type. + ibc.core.channel.v1.MsgTimeout: + title: MsgTimeout receives timed-out packet + type: object + properties: + packet: + title: Packet defines a type that carries data across different chains through + IBC + type: object + properties: + sequence: + type: string + description: |- + number corresponds to the order of sends and receives, where a Packet + with an earlier sequence number must be sent and received before a Packet + with a later sequence number. + format: uint64 + source_port: + type: string + description: identifies the port on the sending chain. + source_channel: + type: string + description: identifies the channel end on the sending chain. + destination_port: + type: string + description: identifies the port on the receiving chain. + destination_channel: + type: string + description: identifies the channel end on the receiving chain.
+ data: + title: actual opaque bytes transferred directly to the application module + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + timeout_height: + title: block height after which the packet times out + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + timeout_timestamp: + title: block timestamp (in nanoseconds) after which the packet times + out + type: string + format: uint64 + proof_unreceived: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g.
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + next_sequence_recv: + type: string + format: uint64 + signer: + type: string + ibc.core.channel.v1.MsgTimeoutOnClose: + type: object + properties: + packet: + title: Packet defines a type that carries data across different chains through + IBC + type: object + properties: + sequence: + type: string + description: |- + number corresponds to the order of sends and receives, where a Packet + with an earlier sequence number must be sent and received before a Packet + with a later sequence number. + format: uint64 + source_port: + type: string + description: identifies the port on the sending chain. + source_channel: + type: string + description: identifies the channel end on the sending chain. + destination_port: + type: string + description: identifies the port on the receiving chain. + destination_channel: + type: string + description: identifies the channel end on the receiving chain. + data: + title: actual opaque bytes transferred directly to the application module + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + timeout_height: + title: block height after which the packet times out + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g.
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + timeout_timestamp: + title: block timestamp (in nanoseconds) after which the packet times + out + type: string + format: uint64 + proof_unreceived: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_close: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + next_sequence_recv: + type: string + format: uint64 + signer: + type: string + description: MsgTimeoutOnClose timed-out packet upon counterparty channel closure.
+ ibc.core.channel.v1.MsgTimeoutOnCloseResponse: + type: object + properties: + result: + title: ResponseResultType defines the possible outcomes of the execution + of a message + type: string + description: |- + - RESPONSE_RESULT_TYPE_UNSPECIFIED: Default zero value enumeration + - RESPONSE_RESULT_TYPE_NOOP: The message did not call the IBC application callbacks (because, for example, the packet had already been relayed) + - RESPONSE_RESULT_TYPE_SUCCESS: The message was executed successfully + default: RESPONSE_RESULT_TYPE_UNSPECIFIED + enum: + - RESPONSE_RESULT_TYPE_UNSPECIFIED + - RESPONSE_RESULT_TYPE_NOOP + - RESPONSE_RESULT_TYPE_SUCCESS + description: MsgTimeoutOnCloseResponse defines the Msg/TimeoutOnClose response + type. + ibc.core.channel.v1.MsgTimeoutResponse: + type: object + properties: + result: + title: ResponseResultType defines the possible outcomes of the execution + of a message + type: string + description: |- + - RESPONSE_RESULT_TYPE_UNSPECIFIED: Default zero value enumeration + - RESPONSE_RESULT_TYPE_NOOP: The message did not call the IBC application callbacks (because, for example, the packet had already been relayed) + - RESPONSE_RESULT_TYPE_SUCCESS: The message was executed successfully + default: RESPONSE_RESULT_TYPE_UNSPECIFIED + enum: + - RESPONSE_RESULT_TYPE_UNSPECIFIED + - RESPONSE_RESULT_TYPE_NOOP + - RESPONSE_RESULT_TYPE_SUCCESS + description: MsgTimeoutResponse defines the Msg/Timeout response type. + ibc.core.channel.v1.Order: + title: Order defines if a channel is ORDERED or UNORDERED + type: string + description: |- + - ORDER_NONE_UNSPECIFIED: zero-value for channel ordering + - ORDER_UNORDERED: packets can be delivered in any order, which may differ from the order in + which they were sent. 
+ - ORDER_ORDERED: packets are delivered exactly in the order which they were sent + default: ORDER_NONE_UNSPECIFIED + enum: + - ORDER_NONE_UNSPECIFIED + - ORDER_UNORDERED + - ORDER_ORDERED + ibc.core.channel.v1.Packet: + title: Packet defines a type that carries data across different chains through + IBC + type: object + properties: + sequence: + type: string + description: |- + number corresponds to the order of sends and receives, where a Packet + with an earlier sequence number must be sent and received before a Packet + with a later sequence number. + format: uint64 + source_port: + type: string + description: identifies the port on the sending chain. + source_channel: + type: string + description: identifies the channel end on the sending chain. + destination_port: + type: string + description: identifies the port on the receiving chain. + destination_channel: + type: string + description: identifies the channel end on the receiving chain. + data: + title: actual opaque bytes transferred directly to the application module + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + timeout_height: + title: block height after which the packet times out + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine + breaking changes. In these cases, the RevisionNumber is incremented so that + height continues to be monotonically increasing even as the RevisionHeight + gets reset + timeout_timestamp: + title: block timestamp (in nanoseconds) after which the packet times out + type: string + format: uint64 + ibc.core.channel.v1.ResponseResultType: + title: ResponseResultType defines the possible outcomes of the execution of + a message + type: string + description: |- + - RESPONSE_RESULT_TYPE_UNSPECIFIED: Default zero value enumeration + - RESPONSE_RESULT_TYPE_NOOP: The message did not call the IBC application callbacks (because, for example, the packet had already been relayed) + - RESPONSE_RESULT_TYPE_SUCCESS: The message was executed successfully + default: RESPONSE_RESULT_TYPE_UNSPECIFIED + enum: + - RESPONSE_RESULT_TYPE_UNSPECIFIED + - RESPONSE_RESULT_TYPE_NOOP + - RESPONSE_RESULT_TYPE_SUCCESS + ibc.core.channel.v1.State: + type: string + description: |- + State defines if a channel is in one of the following states: + CLOSED, INIT, TRYOPEN, OPEN or UNINITIALIZED. + + - STATE_UNINITIALIZED_UNSPECIFIED: Default State + - STATE_INIT: A channel has just started the opening handshake. + - STATE_TRYOPEN: A channel has acknowledged the handshake step on the counterparty chain. + - STATE_OPEN: A channel has completed the handshake. Open channels are + ready to send and receive packets. + - STATE_CLOSED: A channel has been closed and can no longer be used to send or receive + packets. + default: STATE_UNINITIALIZED_UNSPECIFIED + enum: + - STATE_UNINITIALIZED_UNSPECIFIED + - STATE_INIT + - STATE_TRYOPEN + - STATE_OPEN + - STATE_CLOSED + cosmos.upgrade.v1beta1.Plan: + type: object + properties: + name: + type: string + description: |- + Sets the name for the upgrade. This name will be used by the upgraded + version of the software to apply any special "on-upgrade" commands during + the first BeginBlock method after the upgrade is applied.
It is also used + to detect whether a software version can handle a given upgrade. If no + upgrade handler with this name has been set in the software, it will be + assumed that the software is out-of-date when the upgrade Time or Height is + reached and the software will exit. + time: + type: string + description: |- + Deprecated: Time based upgrades have been deprecated. Time based upgrade logic + has been removed from the SDK. + If this field is not empty, an error will be thrown. + format: date-time + height: + type: string + description: The height at which the upgrade must be performed. + format: int64 + info: + title: |- + Any application specific upgrade info to be included on-chain + such as a git commit that validators could automatically upgrade to + type: string + upgraded_client_state: + type: object + additionalProperties: + type: object + description: |- + Deprecated: UpgradedClientState field has been deprecated. IBC upgrade logic has been + moved to the IBC module in the sub module 02-client. + If this field is not empty, an error will be thrown. + description: Plan specifies information about a planned upgrade and when it + should occur. + ibc.core.client.v1.MsgCreateClient: + title: MsgCreateClient defines a message to create an IBC client + type: object + properties: + client_state: + title: light client state + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... 
+ if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": <string>, + "lastName": <string> + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + consensus_state: + type: object + additionalProperties: + type: object + description: |- + consensus state associated with the client that corresponds to a given + height. + signer: + title: signer address + type: string + ibc.core.client.v1.MsgCreateClientResponse: + type: object + description: MsgCreateClientResponse defines the Msg/CreateClient response type.
+ ibc.core.client.v1.MsgIBCSoftwareUpgrade: + title: MsgIBCSoftwareUpgrade defines the message used to schedule an upgrade + of an IBC client using a v1 governance proposal + type: object + properties: + plan: + type: object + properties: + name: + type: string + description: |- + Sets the name for the upgrade. This name will be used by the upgraded + version of the software to apply any special "on-upgrade" commands during + the first BeginBlock method after the upgrade is applied. It is also used + to detect whether a software version can handle a given upgrade. If no + upgrade handler with this name has been set in the software, it will be + assumed that the software is out-of-date when the upgrade Time or Height is + reached and the software will exit. + time: + type: string + description: |- + Deprecated: Time based upgrades have been deprecated. Time based upgrade logic + has been removed from the SDK. + If this field is not empty, an error will be thrown. + format: date-time + height: + type: string + description: The height at which the upgrade must be performed. + format: int64 + info: + title: |- + Any application specific upgrade info to be included on-chain + such as a git commit that validators could automatically upgrade to + type: string + upgraded_client_state: + type: object + additionalProperties: + type: object + description: |- + Deprecated: UpgradedClientState field has been deprecated. IBC upgrade logic has been + moved to the IBC module in the sub module 02-client. + If this field is not empty, an error will be thrown. + description: Plan specifies information about a planned upgrade and when + it should occur. + upgraded_client_state: + type: object + additionalProperties: + type: object + description: |- + An UpgradedClientState must be provided to perform an IBC breaking upgrade. 
+ This will make the chain commit to the correct upgraded (self) client state + before the upgrade occurs, so that connecting chains can verify that the + new upgraded client is valid by verifying a proof on the previous version + of the chain. This will allow IBC connections to persist smoothly across + planned chain upgrades. Correspondingly, the UpgradedClientState field has been + deprecated in the Cosmos SDK to allow for this logic to exist solely in + the 02-client module. + signer: + title: signer address + type: string + ibc.core.client.v1.MsgIBCSoftwareUpgradeResponse: + type: object + description: MsgIBCSoftwareUpgradeResponse defines the Msg/IBCSoftwareUpgrade + response type. + ibc.core.client.v1.MsgRecoverClient: + type: object + properties: + subject_client_id: + title: the client identifier for the client to be updated if the proposal + passes + type: string + substitute_client_id: + title: |- + the substitute client identifier for the client which will replace the subject + client + type: string + signer: + title: signer address + type: string + description: MsgRecoverClient defines the message used to recover a frozen or + expired client. + ibc.core.client.v1.MsgRecoverClientResponse: + type: object + description: MsgRecoverClientResponse defines the Msg/RecoverClient response + type. + ibc.core.client.v1.MsgSubmitMisbehaviour: + type: object + properties: + client_id: + title: client unique identifier + type: string + misbehaviour: + title: misbehaviour used for freezing the light client + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... 
+ if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": <string>, + "lastName": <string> + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + signer: + title: signer address + type: string + description: |- + MsgSubmitMisbehaviour defines an sdk.Msg type that submits Evidence for + light client misbehaviour. + This message has been deprecated. Use MsgUpdateClient instead.
+ ibc.core.client.v1.MsgSubmitMisbehaviourResponse: + type: object + description: |- + MsgSubmitMisbehaviourResponse defines the Msg/SubmitMisbehaviour response + type. + ibc.core.client.v1.MsgUpdateClient: + type: object + properties: + client_id: + title: client unique identifier + type: string + client_message: + title: client message to update the light client + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. 
Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": <string>, + "lastName": <string> + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + signer: + title: signer address + type: string + description: |- + MsgUpdateClient defines an sdk.Msg to update an IBC client state using + the given client message. + ibc.core.client.v1.MsgUpdateClientResponse: + type: object + description: MsgUpdateClientResponse defines the Msg/UpdateClient response type. + ibc.core.client.v1.MsgUpdateParams: + type: object + properties: + signer: + title: signer address + type: string + params: + type: object + properties: + allowed_clients: + type: array + description: |- + allowed_clients defines the list of allowed client state types which can be created + and interacted with. If a client type is removed from the allowed clients list, usage + of this client will be disabled until it is added again to the list. + items: + type: string + description: |- + params defines the client parameters to update. + + NOTE: All parameters must be supplied. + description: MsgUpdateParams defines the sdk.Msg type to update the client parameters. + ibc.core.client.v1.MsgUpdateParamsResponse: + type: object + description: MsgUpdateParamsResponse defines the MsgUpdateParams response type.
+ ibc.core.client.v1.MsgUpgradeClient: + title: |- + MsgUpgradeClient defines an sdk.Msg to upgrade an IBC client to a new client + state + type: object + properties: + client_id: + title: client unique identifier + type: string + client_state: + title: upgraded client state + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. 
Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": <string>, + "lastName": <string> + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + consensus_state: + title: |- + upgraded consensus state, only contains enough information to serve as a + basis of trust in update logic + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ...
+ } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": <string>, + "lastName": <string> + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + proof_upgrade_client: + title: proof that old chain committed to new client + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof_upgrade_consensus_state: + title: proof that old chain committed to new consensus state + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + signer: + title: signer address + type: string + ibc.core.client.v1.MsgUpgradeClientResponse: + type: object + description: MsgUpgradeClientResponse defines the Msg/UpgradeClient response + type. + ibc.core.client.v1.Params: + type: object + properties: + allowed_clients: + type: array + description: |- + allowed_clients defines the list of allowed client state types which can be created + and interacted with. If a client type is removed from the allowed clients list, usage + of this client will be disabled until it is added again to the list.
+ items:
+ type: string
+ description: Params defines the set of IBC light client parameters.
+ ibc.core.commitment.v1.MerklePrefix:
+ title: |-
+ MerklePrefix is merkle path prefixed to the key.
+ The constructed key from the Path and the key will be append(Path.KeyPath,
+ append(Path.KeyPrefix, key...))
+ type: object
+ properties:
+ key_prefix:
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ ibc.core.connection.v1.Counterparty:
+ type: object
+ properties:
+ client_id:
+ type: string
+ description: |-
+ identifies the client on the counterparty chain associated with a given
+ connection.
+ connection_id:
+ type: string
+ description: |-
+ identifies the connection end on the counterparty chain associated with a
+ given connection.
+ prefix:
+ title: |-
+ MerklePrefix is merkle path prefixed to the key.
+ The constructed key from the Path and the key will be append(Path.KeyPath,
+ append(Path.KeyPrefix, key...))
+ type: object
+ properties:
+ key_prefix:
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ description: commitment merkle prefix of the counterparty chain.
+ description: Counterparty defines the counterparty chain associated with a connection
+ end.
+ ibc.core.connection.v1.MsgConnectionOpenAck:
+ type: object
+ properties:
+ connection_id:
+ type: string
+ counterparty_connection_id:
+ type: string
+ version:
+ type: object
+ properties:
+ identifier:
+ title: unique version identifier
+ type: string
+ features:
+ title: list of features compatible with the specified identifier
+ type: array
+ items:
+ type: string
+ description: |-
+ Version defines the versioning scheme used to negotiate the IBC version in
+ the connection handshake.
+ client_state: + type: object + additionalProperties: + type: object + description: |- + `Any` contains an arbitrary serialized protocol buffer message along with a + URL that describes the type of the serialized message. + + Protobuf library provides support to pack/unpack Any values in the form + of utility functions or additional generated methods of the Any type. + + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. 
Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. Example (for message [google.protobuf.Duration][]): + + { + "@type": "type.googleapis.com/google.protobuf.Duration", + "value": "1.212s" + } + proof_height: + title: |- + Height is a monotonically increasing data type + that can be compared against another Height for the purposes of updating and + freezing clients + type: object + properties: + revision_number: + title: the revision that the client is currently on + type: string + format: uint64 + revision_height: + title: the height within the given revision + type: string + format: uint64 + description: |- + Normally the RevisionHeight is incremented at each height while keeping + RevisionNumber the same. However some consensus algorithms may choose to + reset the height in certain conditions e.g. 
hard forks, state-machine
+ breaking changes. In these cases, the RevisionNumber is incremented so that
+ height continues to be monotonically increasing even as the RevisionHeight
+ gets reset
+ proof_try:
+ title: |-
+ proof of the initialization of the connection on Chain B: `UNINITIALIZED ->
+ TRYOPEN`
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ proof_client:
+ title: proof of client state included in message
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ proof_consensus:
+ title: proof of client consensus state
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ consensus_height:
+ title: |-
+ Height is a monotonically increasing data type
+ that can be compared against another Height for the purposes of updating and
+ freezing clients
+ type: object
+ properties:
+ revision_number:
+ title: the revision that the client is currently on
+ type: string
+ format: uint64
+ revision_height:
+ title: the height within the given revision
+ type: string
+ format: uint64
+ description: |-
+ Normally the RevisionHeight is incremented at each height while keeping
+ RevisionNumber the same. However some consensus algorithms may choose to
+ reset the height in certain conditions e.g. hard forks, state-machine
+ breaking changes. In these cases, the RevisionNumber is incremented so that
+ height continues to be monotonically increasing even as the RevisionHeight
+ gets reset
+ signer:
+ type: string
+ host_consensus_state_proof:
+ title: optional proof data for host state machines that are unable to introspect
+ their own consensus state
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ description: |-
+ MsgConnectionOpenAck defines a msg sent by a Relayer to Chain A to
+ acknowledge the change of connection state to TRYOPEN on Chain B.
+ ibc.core.connection.v1.MsgConnectionOpenAckResponse:
+ type: object
+ description: MsgConnectionOpenAckResponse defines the Msg/ConnectionOpenAck
+ response type.
+ ibc.core.connection.v1.MsgConnectionOpenConfirm:
+ type: object
+ properties:
+ connection_id:
+ type: string
+ proof_ack:
+ title: "proof for the change of the connection state on Chain A: `INIT ->\
+ \ OPEN`"
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ proof_height:
+ title: |-
+ Height is a monotonically increasing data type
+ that can be compared against another Height for the purposes of updating and
+ freezing clients
+ type: object
+ properties:
+ revision_number:
+ title: the revision that the client is currently on
+ type: string
+ format: uint64
+ revision_height:
+ title: the height within the given revision
+ type: string
+ format: uint64
+ description: |-
+ Normally the RevisionHeight is incremented at each height while keeping
+ RevisionNumber the same. However some consensus algorithms may choose to
+ reset the height in certain conditions e.g. hard forks, state-machine
+ breaking changes. In these cases, the RevisionNumber is incremented so that
+ height continues to be monotonically increasing even as the RevisionHeight
+ gets reset
+ signer:
+ type: string
+ description: |-
+ MsgConnectionOpenConfirm defines a msg sent by a Relayer to Chain B to
+ acknowledge the change of connection state to OPEN on Chain A.
+ ibc.core.connection.v1.MsgConnectionOpenConfirmResponse:
+ type: object
+ description: |-
+ MsgConnectionOpenConfirmResponse defines the Msg/ConnectionOpenConfirm
+ response type.
+ ibc.core.connection.v1.MsgConnectionOpenInit:
+ type: object
+ properties:
+ client_id:
+ type: string
+ counterparty:
+ type: object
+ properties:
+ client_id:
+ type: string
+ description: |-
+ identifies the client on the counterparty chain associated with a given
+ connection.
+ connection_id:
+ type: string
+ description: |-
+ identifies the connection end on the counterparty chain associated with a
+ given connection.
+ prefix:
+ title: |-
+ MerklePrefix is merkle path prefixed to the key.
+ The constructed key from the Path and the key will be append(Path.KeyPath,
+ append(Path.KeyPrefix, key...))
+ type: object
+ properties:
+ key_prefix:
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ description: commitment merkle prefix of the counterparty chain.
+ description: Counterparty defines the counterparty chain associated with
+ a connection end.
+ version:
+ type: object
+ properties:
+ identifier:
+ title: unique version identifier
+ type: string
+ features:
+ title: list of features compatible with the specified identifier
+ type: array
+ items:
+ type: string
+ description: |-
+ Version defines the versioning scheme used to negotiate the IBC version in
+ the connection handshake.
+ delay_period:
+ type: string
+ format: uint64
+ signer:
+ type: string
+ description: |-
+ MsgConnectionOpenInit defines the msg sent by an account on Chain A to
+ initialize a connection with Chain B.
+ ibc.core.connection.v1.MsgConnectionOpenInitResponse:
+ type: object
+ description: |-
+ MsgConnectionOpenInitResponse defines the Msg/ConnectionOpenInit response
+ type.
+ ibc.core.connection.v1.MsgConnectionOpenTry:
+ type: object
+ properties:
+ client_id:
+ type: string
+ previous_connection_id:
+ type: string
+ description: "Deprecated: this field is unused. Crossing hellos are no longer\
+ \ supported in core IBC."
+ client_state:
+ type: object
+ additionalProperties:
+ type: object
+ description: |-
+ `Any` contains an arbitrary serialized protocol buffer message along with a
+ URL that describes the type of the serialized message.
+
+ Protobuf library provides support to pack/unpack Any values in the form
+ of utility functions or additional generated methods of the Any type.
+ + Example 1: Pack and unpack a message in C++. + + Foo foo = ...; + Any any; + any.PackFrom(foo); + ... + if (any.UnpackTo(&foo)) { + ... + } + + Example 2: Pack and unpack a message in Java. + + Foo foo = ...; + Any any = Any.pack(foo); + ... + if (any.is(Foo.class)) { + foo = any.unpack(Foo.class); + } + // or ... + if (any.isSameTypeAs(Foo.getDefaultInstance())) { + foo = any.unpack(Foo.getDefaultInstance()); + } + + Example 3: Pack and unpack a message in Python. + + foo = Foo(...) + any = Any() + any.Pack(foo) + ... + if any.Is(Foo.DESCRIPTOR): + any.Unpack(foo) + ... + + Example 4: Pack and unpack a message in Go + + foo := &pb.Foo{...} + any, err := anypb.New(foo) + if err != nil { + ... + } + ... + foo := &pb.Foo{} + if err := any.UnmarshalTo(foo); err != nil { + ... + } + + The pack methods provided by protobuf library will by default use + 'type.googleapis.com/full.type.name' as the type URL and the unpack + methods only use the fully qualified type name after the last '/' + in the type URL, for example "foo.bar.com/x/y.z" will yield type + name "y.z". + + JSON + + The JSON representation of an `Any` value uses the regular + representation of the deserialized, embedded message, with an + additional field `@type` which contains the type URL. Example: + + package google.profile; + message Person { + string first_name = 1; + string last_name = 2; + } + + { + "@type": "type.googleapis.com/google.profile.Person", + "firstName": , + "lastName": + } + + If the embedded message type is well-known and has a custom JSON + representation, that representation will be embedded adding a field + `value` which holds the custom JSON in addition to the `@type` + field. 
Example (for message [google.protobuf.Duration][]):
+
+ {
+ "@type": "type.googleapis.com/google.protobuf.Duration",
+ "value": "1.212s"
+ }
+ counterparty:
+ type: object
+ properties:
+ client_id:
+ type: string
+ description: |-
+ identifies the client on the counterparty chain associated with a given
+ connection.
+ connection_id:
+ type: string
+ description: |-
+ identifies the connection end on the counterparty chain associated with a
+ given connection.
+ prefix:
+ title: |-
+ MerklePrefix is merkle path prefixed to the key.
+ The constructed key from the Path and the key will be append(Path.KeyPath,
+ append(Path.KeyPrefix, key...))
+ type: object
+ properties:
+ key_prefix:
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ description: commitment merkle prefix of the counterparty chain.
+ description: Counterparty defines the counterparty chain associated with
+ a connection end.
+ delay_period:
+ type: string
+ format: uint64
+ counterparty_versions:
+ type: array
+ items:
+ type: object
+ properties:
+ identifier:
+ title: unique version identifier
+ type: string
+ features:
+ title: list of features compatible with the specified identifier
+ type: array
+ items:
+ type: string
+ description: |-
+ Version defines the versioning scheme used to negotiate the IBC version in
+ the connection handshake.
+ proof_height:
+ title: |-
+ Height is a monotonically increasing data type
+ that can be compared against another Height for the purposes of updating and
+ freezing clients
+ type: object
+ properties:
+ revision_number:
+ title: the revision that the client is currently on
+ type: string
+ format: uint64
+ revision_height:
+ title: the height within the given revision
+ type: string
+ format: uint64
+ description: |-
+ Normally the RevisionHeight is incremented at each height while keeping
+ RevisionNumber the same.
However some consensus algorithms may choose to
+ reset the height in certain conditions e.g. hard forks, state-machine
+ breaking changes. In these cases, the RevisionNumber is incremented so that
+ height continues to be monotonically increasing even as the RevisionHeight
+ gets reset
+ proof_init:
+ title: |-
+ proof of the initialization of the connection on Chain A: `UNINITIALIZED ->
+ INIT`
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ proof_client:
+ title: proof of client state included in message
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ proof_consensus:
+ title: proof of client consensus state
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ consensus_height:
+ title: |-
+ Height is a monotonically increasing data type
+ that can be compared against another Height for the purposes of updating and
+ freezing clients
+ type: object
+ properties:
+ revision_number:
+ title: the revision that the client is currently on
+ type: string
+ format: uint64
+ revision_height:
+ title: the height within the given revision
+ type: string
+ format: uint64
+ description: |-
+ Normally the RevisionHeight is incremented at each height while keeping
+ RevisionNumber the same. However some consensus algorithms may choose to
+ reset the height in certain conditions e.g.
hard forks, state-machine
+ breaking changes. In these cases, the RevisionNumber is incremented so that
+ height continues to be monotonically increasing even as the RevisionHeight
+ gets reset
+ signer:
+ type: string
+ host_consensus_state_proof:
+ title: optional proof data for host state machines that are unable to introspect
+ their own consensus state
+ pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$"
+ type: string
+ format: byte
+ description: |-
+ MsgConnectionOpenTry defines a msg sent by a Relayer to try to open a
+ connection on Chain B.
+ ibc.core.connection.v1.MsgConnectionOpenTryResponse:
+ type: object
+ description: MsgConnectionOpenTryResponse defines the Msg/ConnectionOpenTry
+ response type.
+ ibc.core.connection.v1.MsgUpdateParams:
+ type: object
+ properties:
+ signer:
+ title: signer address
+ type: string
+ params:
+ type: object
+ properties:
+ max_expected_time_per_block:
+ type: string
+ description: |-
+ maximum expected time per block (in nanoseconds), used to enforce block delay. This parameter should reflect the
+ largest amount of time that the chain might reasonably take to produce the next block under normal operating
+ conditions. A safe choice is 3-5x the expected time per block.
+ format: uint64
+ description: |-
+ params defines the connection parameters to update.
+
+ NOTE: All parameters must be supplied.
+ description: MsgUpdateParams defines the sdk.Msg type to update the connection
+ parameters.
+ ibc.core.connection.v1.MsgUpdateParamsResponse:
+ type: object
+ description: MsgUpdateParamsResponse defines the MsgUpdateParams response type.
+ ibc.core.connection.v1.Params:
+ type: object
+ properties:
+ max_expected_time_per_block:
+ type: string
+ description: |-
+ maximum expected time per block (in nanoseconds), used to enforce block delay.
This parameter should reflect the
+ largest amount of time that the chain might reasonably take to produce the next block under normal operating
+ conditions. A safe choice is 3-5x the expected time per block.
+ format: uint64
+ description: Params defines the set of Connection parameters.
+ ibc.core.connection.v1.Version:
+ type: object
+ properties:
+ identifier:
+ title: unique version identifier
+ type: string
+ features:
+ title: list of features compatible with the specified identifier
+ type: array
+ items:
+ type: string
+ description: |-
+ Version defines the versioning scheme used to negotiate the IBC version in
+ the connection handshake.
+ sourcehub.acp.AccessDecision:
+ title: AccessDecision models the result of evaluating a set of AccessRequests
+ for an Actor
+ type: object
+ properties:
+ id:
+ type: string
+ policy_id:
+ title: used as part of id generation
+ type: string
+ creator:
+ title: used as part of id generation
+ type: string
+ creator_acc_sequence:
+ title: used as part of id generation
+ type: string
+ format: uint64
+ operations:
+ title: used as part of id generation
+ type: array
+ items:
+ type: object
+ properties:
+ object:
+ title: target object for operation
+ type: object
+ properties:
+ resource:
+ type: string
+ id:
+ type: string
+ description: Object represents an entity which must be access controlled
+ within a Policy.
+ permission:
+ title: permission required to perform operation
+ type: string
+ description: Operation represents an action over an object.
+ actor_did: + title: used as part of id generation + type: string + actor: + title: used as part of id generation + type: string + params: + title: used as part of id generation + type: object + properties: + decision_expiration_delta: + title: number of blocks a Decision is valid for + type: string + format: uint64 + proof_expiration_delta: + title: number of blocks a DecisionProof is valid for + type: string + format: uint64 + ticket_expiration_delta: + title: number of blocks an AccessTicket is valid for + type: string + format: uint64 + creation_time: + type: string + format: date-time + issued_height: + title: issued_height stores the block height when the Decision was evaluated + type: string + format: uint64 + sourcehub.acp.AccessRequest: + title: AccessRequest represents the wish to perform a set of operations by an + actor + type: object + properties: + operations: + type: array + items: + type: object + properties: + object: + title: target object for operation + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + permission: + title: permission required to perform operation + type: string + description: Operation represents an action over an object. + actor: + title: actor requesting operations + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests to a + Policy. + sourcehub.acp.Actor: + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests to a Policy. 
+ sourcehub.acp.ActorResource: + type: object + properties: + name: + type: string + doc: + type: string + relations: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors resource + type: string + relation_name: + title: relation_name scopes permissible actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + description: ActorResource represents a special Resource which is reserved for + Policy actors. + sourcehub.acp.ActorSet: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + relation: + type: string + description: |- + ActorSet represents a set of Actors in a Policy. + It is specified through an Object, Relation pair, which represents + all actors which have a relationship with given obj-rel pair. + This expansion is recursive. + sourcehub.acp.AllActors: + type: object + description: |- + AllActors models a special Relationship Subject which indicates + that all Actors in the Policy are included. 
+ sourcehub.acp.DecisionParams: + title: DecisionParams stores auxiliary information regarding the validity of + a decision + type: object + properties: + decision_expiration_delta: + title: number of blocks a Decision is valid for + type: string + format: uint64 + proof_expiration_delta: + title: number of blocks a DecisionProof is valid for + type: string + format: uint64 + ticket_expiration_delta: + title: number of blocks an AccessTicket is valid for + type: string + format: uint64 + sourcehub.acp.MsgCheckAccess: + type: object + properties: + creator: + type: string + policy_id: + type: string + creation_time: + type: string + format: date-time + access_request: + title: AccessRequest represents the wish to perform a set of operations + by an actor + type: object + properties: + operations: + type: array + items: + type: object + properties: + object: + title: target object for operation + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy. + permission: + title: permission required to perform operation + type: string + description: Operation represents an action over an object. + actor: + title: actor requesting operations + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests + to a Policy. 
+ sourcehub.acp.MsgCheckAccessResponse: + type: object + properties: + decision: + title: AccessDecision models the result of evaluating a set of AccessRequests + for an Actor + type: object + properties: + id: + type: string + policy_id: + title: used as part of id generation + type: string + creator: + title: used as part of id generation + type: string + creator_acc_sequence: + title: used as part of id generation + type: string + format: uint64 + operations: + title: used as part of id generation + type: array + items: + type: object + properties: + object: + title: target object for operation + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy. + permission: + title: permission required to perform operation + type: string + description: Operation represents an action over an object. + actor_did: + title: used as part of id generation + type: string + actor: + title: used as part of id generation + type: string + params: + title: used as part of id generation + type: object + properties: + decision_expiration_delta: + title: number of blocks a Decision is valid for + type: string + format: uint64 + proof_expiration_delta: + title: number of blocks a DecisionProof is valid for + type: string + format: uint64 + ticket_expiration_delta: + title: number of blocks an AccessTicket is valid for + type: string + format: uint64 + creation_time: + type: string + format: date-time + issued_height: + title: issued_height stores the block height when the Decision was evaluated + type: string + format: uint64 + sourcehub.acp.MsgCreatePolicy: + type: object + properties: + creator: + type: string + policy: + type: string + marshal_type: + type: string + description: |- + PolicyEncodingType enumerates supported marshaling types for policies. 
+
+ - UNKNOWN: Fallback value for a missing Marshaling Type
+ - SHORT_YAML: Policy Marshaled as a YAML Short Policy definition
+ - SHORT_JSON: Policy Marshaled as a JSON Short Policy definition
+ default: UNKNOWN
+ enum:
+ - UNKNOWN
+ - SHORT_YAML
+ - SHORT_JSON
+ creation_time:
+ type: string
+ format: date-time
+ sourcehub.acp.MsgCreatePolicyResponse:
+ type: object
+ properties:
+ policy:
+ type: object
+ properties:
+ id:
+ type: string
+ name:
+ type: string
+ description:
+ type: string
+ creation_time:
+ type: string
+ format: date-time
+ attributes:
+ type: object
+ additionalProperties:
+ type: string
+ resources:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ doc:
+ type: string
+ permissions:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ doc:
+ type: string
+ expression:
+ type: string
+ description: |-
+ Permission models a special type of Relation which is evaluated at runtime.
+ A permission often maps to an operation defined for a resource which an actor may attempt.
+ relations:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ doc:
+ type: string
+ manages:
+ title: list of relations managed by the current relation
+ type: array
+ items:
+ type: string
+ vr_types:
+ title: value restriction types
+ type: array
+ items:
+ type: object
+ properties:
+ resource_name:
+ title: resource_name scopes permissible actors resource
+ type: string
+ relation_name:
+ title: relation_name scopes permissible actors relation
+ type: string
+ description: |-
+ Restriction models a specification which a Relationship's actor
+ should meet.
+ description: |-
+ Resource models a namespace for objects in a Policy.
+ Applications will have multiple entities which they must manage such as files or groups.
+ A Resource represents a set of entities of a certain type.
+ actor_resource:
+ type: object
+ properties:
+ name:
+ type: string
+ doc:
+ type: string
+ relations:
+ type: array
+ items:
+ type: object
+ properties:
+ name:
+ type: string
+ doc:
+ type: string
+ manages:
+ title: list of relations managed by the current relation
+ type: array
+ items:
+ type: string
+ vr_types:
+ title: value restriction types
+ type: array
+ items:
+ type: object
+ properties:
+ resource_name:
+ title: resource_name scopes permissible actors resource
+ type: string
+ relation_name:
+ title: relation_name scopes permissible actors relation
+ type: string
+ description: |-
+ Restriction models a specification which a Relationship's actor
+ should meet.
+ description: ActorResource represents a special Resource which is reserved
+ for Policy actors.
+ creator:
+ type: string
+ description: |-
+ Policy represents an ACP module Policy definition.
+ Each Policy defines a set of high level rules over how the access control system
+ should behave.
+ sourcehub.acp.MsgDeleteRelationship:
+ type: object
+ properties:
+ creator:
+ type: string
+ policy_id:
+ type: string
+ relationship:
+ type: object
+ properties:
+ object:
+ type: object
+ properties:
+ resource:
+ type: string
+ id:
+ type: string
+ description: Object represents an entity which must be access controlled
+ within a Policy.
+ relation:
+ type: string
+ subject:
+ type: object
+ properties:
+ actor:
+ type: object
+ properties:
+ id:
+ type: string
+ description: Actor represents an entity which makes access requests
+ to a Policy.
+ actor_set:
+ type: object
+ properties:
+ object:
+ type: object
+ properties:
+ resource:
+ type: string
+ id:
+ type: string
+ description: Object represents an entity which must be access
+ controlled within a Policy.
+ relation:
+ type: string
+ description: |-
+ ActorSet represents a set of Actors in a Policy.
+ It is specified through an Object, Relation pair, which represents
+ all actors which have a relationship with given obj-rel pair.
+ This expansion is recursive.
+ all_actors:
+ type: object
+ properties: {}
+ description: |-
+ AllActors models a special Relationship Subject which indicates
+ that all Actors in the Policy are included.
+ object:
+ type: object
+ properties:
+ resource:
+ type: string
+ id:
+ type: string
+ description: Object represents an entity which must be access controlled
+ within a Policy.
+ description: Subject specifies the target of a Relationship.
+ description: |-
+ Relationship models an access control rule.
+ It states that the given subject has relation with object.
+ sourcehub.acp.MsgDeleteRelationshipResponse:
+ type: object
+ properties:
+ record_found:
+ type: boolean
+ sourcehub.acp.MsgRegisterObject:
+ type: object
+ properties:
+ creator:
+ type: string
+ policy_id:
+ type: string
+ object:
+ type: object
+ properties:
+ resource:
+ type: string
+ id:
+ type: string
+ description: Object represents an entity which must be access controlled
+ within a Policy.
+ creation_time:
+ type: string
+ format: date-time
+ sourcehub.acp.MsgRegisterObjectResponse:
+ type: object
+ properties:
+ result:
+ title: RegistrationResult encodes the possible result set from Registering
+ an Object
+ type: string
+ description: |-
+ - NoOp: NoOp indicates no action was taken. The operation failed or the Object already existed and was active
+ - Registered: Registered indicates the Object was successfully registered to the Actor.
+ - Unarchived: Unarchived indicates that a previously deleted Object is active again.
+ Only the original owners can Unarchive an object.
+ default: NoOp + enum: + - NoOp + - Registered + - Unarchived + sourcehub.acp.MsgSetRelationship: + type: object + properties: + creator: + type: string + policy_id: + type: string + creation_time: + type: string + format: date-time + relationship: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + relation: + type: string + subject: + type: object + properties: + actor: + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests + to a Policy. + actor_set: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access + controlled within a Policy. + relation: + type: string + description: |- + ActorSet represents a set of Actors in a Policy. + It is specified through an Object, Relation pair, which represents + all actors which have a relationship with given obj-rel pair. + This expansion is recursive. + all_actors: + type: object + properties: {} + description: |- + AllActors models a special Relationship Subject which indicates + that all Actors in the Policy are included. + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + description: Subject specifies the target of a Relationship. + description: |- + Relationship models an access control rule. + It states that the given subject has relation with object. 
+ sourcehub.acp.MsgSetRelationshipResponse: + type: object + properties: + record_existed: + title: "Indicates whether the given Relationship previously existed, ie\ + \ the Tx was a no op" + type: boolean + sourcehub.acp.MsgUnregisterObject: + type: object + properties: + creator: + type: string + policy_id: + type: string + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + sourcehub.acp.MsgUnregisterObjectResponse: + type: object + properties: + found: + type: boolean + sourcehub.acp.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: {} + description: "NOTE: All parameters must be supplied." + description: MsgUpdateParams is the Msg/UpdateParams request type. + sourcehub.acp.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. + sourcehub.acp.Object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled within + a Policy. + sourcehub.acp.Operation: + type: object + properties: + object: + title: target object for operation + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + permission: + title: permission required to perform operation + type: string + description: Operation represents an action over an object. + sourcehub.acp.Params: + type: object + description: Params defines the parameters for the module. 
+ sourcehub.acp.Permission: + type: object + properties: + name: + type: string + doc: + type: string + expression: + type: string + description: |- + Permission models a special type of Relation which is evaluated at runtime. + A permission often maps to an operation defined for a resource which an actor may attempt. + sourcehub.acp.Policy: + type: object + properties: + id: + type: string + name: + type: string + description: + type: string + creation_time: + type: string + format: date-time + attributes: + type: object + additionalProperties: + type: string + resources: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + permissions: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + expression: + type: string + description: |- + Permission models a special type of Relation which is evaluated at runtime. + A permission often maps to an operation defined for a resource which an actor may attempt. + relations: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors resource + type: string + relation_name: + title: relation_name scopes permissible actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + description: |- + Resource models a namespace for objects in a Policy. + Applications will have multiple entities which they must manage such as files or groups. + A Resource represents a set of entities of a certain type.
+ actor_resource: + type: object + properties: + name: + type: string + doc: + type: string + relations: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors resource + type: string + relation_name: + title: relation_name scopes permissible actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + description: ActorResource represents a special Resource which is reserved + for Policy actors. + creator: + type: string + description: |- + Policy represents an ACP module Policy definition. + Each Policy defines a set of high level rules over how the access control system + should behave. + sourcehub.acp.PolicyMarshalingType: + type: string + description: |- + PolicyEncodingType enumerates supported marshaling types for policies. + + - UNKNOWN: Fallback value for a missing Marshaling Type + - SHORT_YAML: Policy Marshaled as a YAML Short Policy definition + - SHORT_JSON: Policy Marshaled as a JSON Short Policy definition + default: UNKNOWN + enum: + - UNKNOWN + - SHORT_YAML + - SHORT_JSON + sourcehub.acp.RegistrationResult: + title: RegistrationResult encodes the possible result set from Registering an + Object + type: string + description: |- + - NoOp: NoOp indicates no action was taken. The operation failed or the Object already existed and was active + - Registered: Registered indicates the Object was successfully registered to the Actor. + - Unarchived: Unarchived indicates that a previously deleted Object is active again. + Only the original owners can Unarchive an object.
+ default: NoOp + enum: + - NoOp + - Registered + - Unarchived + sourcehub.acp.Relation: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors resource + type: string + relation_name: + title: relation_name scopes permissible actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + sourcehub.acp.Relationship: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + relation: + type: string + subject: + type: object + properties: + actor: + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests + to a Policy. + actor_set: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + relation: + type: string + description: |- + ActorSet represents a set of Actors in a Policy. + It is specified through an Object, Relation pair, which represents + all actors which have a relationship with given obj-rel pair. + This expansion is recursive. + all_actors: + type: object + properties: {} + description: |- + AllActors models a special Relationship Subject which indicates + that all Actors in the Policy are included. + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. 
+ description: Subject specifies the target of a Relationship. + description: |- + Relationship models an access control rule. + It states that the given subject has relation with object. + sourcehub.acp.Resource: + type: object + properties: + name: + type: string + doc: + type: string + permissions: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + expression: + type: string + description: |- + Permission models a special type of Relation which is evaluated at runtime. + A permission often maps to an operation defined for a resource which an actor may attempt. + relations: + type: array + items: + type: object + properties: + name: + type: string + doc: + type: string + manages: + title: list of relations managed by the current relation + type: array + items: + type: string + vr_types: + title: value restriction types + type: array + items: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors resource + type: string + relation_name: + title: relation_name scopes permissible actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + description: |- + Resource models a namespace for objects in a Policy. + Applications will have multiple entities which they must manage such as files or groups. + A Resource represents a set of entities of a certain type. + sourcehub.acp.Restriction: + type: object + properties: + resource_name: + title: resource_name scopes permissible actors resource + type: string + relation_name: + title: relation_name scopes permissible actors relation + type: string + description: |- + Restriction models a specification which a Relationship's actor + should meet. + sourcehub.acp.Subject: + type: object + properties: + actor: + type: object + properties: + id: + type: string + description: Actor represents an entity which makes access requests to a + Policy.
+ actor_set: + type: object + properties: + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + relation: + type: string + description: |- + ActorSet represents a set of Actors in a Policy. + It is specified through an Object, Relation pair, which represents + all actors which have a relationship with given obj-rel pair. + This expansion is recursive. + all_actors: + type: object + properties: {} + description: |- + AllActors models a special Relationship Subject which indicates + that all Actors in the Policy are included. + object: + type: object + properties: + resource: + type: string + id: + type: string + description: Object represents an entity which must be access controlled + within a Policy. + description: Subject specifies the target of a Relationship. + sourcehub.bulletin.MsgCreatePost: + type: object + properties: + creator: + type: string + namespace: + type: string + payload: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + proof: + pattern: "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$" + type: string + format: byte + sourcehub.bulletin.MsgCreatePostResponse: + type: object + sourcehub.bulletin.MsgUpdateParams: + type: object + properties: + authority: + type: string + description: authority is the address that controls the module (defaults + to x/gov unless overwritten). + params: + type: object + properties: {} + description: "NOTE: All parameters must be supplied." + description: MsgUpdateParams is the Msg/UpdateParams request type. + sourcehub.bulletin.MsgUpdateParamsResponse: + type: object + description: |- + MsgUpdateParamsResponse defines the response structure for executing a + MsgUpdateParams message. + sourcehub.bulletin.Params: + type: object + description: Params defines the parameters for the module. 
+x-original-swagger-version: "2.0" diff --git a/package-lock.json b/package-lock.json index 8f93c3c..246052a 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,11 +1,11 @@ { - "name": "website", + "name": "docs.source.network", "version": "0.0.0", "lockfileVersion": 3, "requires": true, "packages": { "": { - "name": "website", + "name": "docs.source.network", "version": "0.0.0", "dependencies": { "@docusaurus/core": "2.4.0", @@ -14,11 +14,13 @@ "@svgr/webpack": "^7.0.0", "clsx": "^1.2.1", "docusaurus-plugin-sass": "^0.2.3", + "docusaurus-preset-openapi": "^0.6.4", "prism-react-renderer": "^1.3.5", "react": "^17.0.2", "react-dom": "^17.0.2", "react-icons": "^4.8.0", - "sass": "^1.60.0" + "sass": "^1.60.0", + "url": "^0.11.3" }, "devDependencies": { "@docusaurus/module-type-aliases": "^2.4.0", @@ -3290,6 +3292,11 @@ "node": ">=10.13.0" } }, + "node_modules/@faker-js/faker": { + "version": "5.5.3", + "resolved": "https://registry.npmjs.org/@faker-js/faker/-/faker-5.5.3.tgz", + "integrity": "sha512-R11tGE6yIFwqpaIqcfkcg7AICXzFg14+5h5v0TfF/9+RMDL6jhzCy/pxHVOfbALGdtVYdt6JdR21tuxEgl34dw==" + }, "node_modules/@hapi/hoek": { "version": "9.3.0", "resolved": "https://registry.npmjs.org/@hapi/hoek/-/hoek-9.3.0.tgz", @@ -3539,6 +3546,30 @@ "url": "https://opencollective.com/unified" } }, + "node_modules/@monaco-editor/loader": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/@monaco-editor/loader/-/loader-1.4.0.tgz", + "integrity": "sha512-00ioBig0x642hytVspPl7DbQyaSWRaolYie/UFNjoTdvoKPzo6xrXLhTk9ixgIKcLH5b5vDOjVNiGyY+uDCUlg==", + "dependencies": { + "state-local": "^1.0.6" + }, + "peerDependencies": { + "monaco-editor": ">= 0.21.0 < 1" + } + }, + "node_modules/@monaco-editor/react": { + "version": "4.6.0", + "resolved": "https://registry.npmjs.org/@monaco-editor/react/-/react-4.6.0.tgz", + "integrity": "sha512-RFkU9/i7cN2bsq/iTkurMWOEErmYcY6JiQI3Jn+WeR/FGISH8JbHERjpS9oRuSOPvDMJI0Z8nJeKkbOs9sBYQw==", + 
"dependencies": { + "@monaco-editor/loader": "^1.4.0" + }, + "peerDependencies": { + "monaco-editor": ">= 0.25.0 < 1", + "react": "^16.8.0 || ^17.0.0 || ^18.0.0", + "react-dom": "^16.8.0 || ^17.0.0 || ^18.0.0" + } + }, "node_modules/@nodelib/fs.scandir": { "version": "2.1.5", "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", @@ -3586,6 +3617,29 @@ "resolved": "https://registry.npmjs.org/@polka/url/-/url-1.0.0-next.21.tgz", "integrity": "sha512-a5Sab1C4/icpTZVzZc5Ghpz88yQtGOyNqYXcZgOssB2uuAr+wF/MvN6bgtW32q7HHrvBki+BsZ0OuNv6EV3K9g==" }, + "node_modules/@reduxjs/toolkit": { + "version": "1.9.7", + "resolved": "https://registry.npmjs.org/@reduxjs/toolkit/-/toolkit-1.9.7.tgz", + "integrity": "sha512-t7v8ZPxhhKgOKtU+uyJT13lu4vL7az5aFi4IdoDs/eS548edn2M8Ik9h8fxgvMjGoAUVFSt6ZC1P5cWmQ014QQ==", + "dependencies": { + "immer": "^9.0.21", + "redux": "^4.2.1", + "redux-thunk": "^2.4.2", + "reselect": "^4.1.8" + }, + "peerDependencies": { + "react": "^16.9.0 || ^17.0.0 || ^18", + "react-redux": "^7.2.1 || ^8.0.2" + }, + "peerDependenciesMeta": { + "react": { + "optional": true + }, + "react-redux": { + "optional": true + } + } + }, "node_modules/@sideway/address": { "version": "4.1.4", "resolved": "https://registry.npmjs.org/@sideway/address/-/address-4.1.4.tgz", @@ -3985,6 +4039,15 @@ "resolved": "https://registry.npmjs.org/@types/history/-/history-4.7.11.tgz", "integrity": "sha512-qjDJRrmvBMiTx+jyLxvLfJU7UznFuokDv4f3WRuriHKERccVpFU+8XMQUAbDzoiJCsmexxRExQeMwwCdamSKDA==" }, + "node_modules/@types/hoist-non-react-statics": { + "version": "3.3.5", + "resolved": "https://registry.npmjs.org/@types/hoist-non-react-statics/-/hoist-non-react-statics-3.3.5.tgz", + "integrity": "sha512-SbcrWzkKBw2cdwRTwQAswfpB9g9LJWfjtUeW/jvNwbhC8cpmmNYVePa+ncbUe0rGTQ7G3Ff6mYUN2VMfLVr+Sg==", + "dependencies": { + "@types/react": "*", + "hoist-non-react-statics": "^3.3.0" + } + }, 
"node_modules/@types/html-minifier-terser": { "version": "6.1.0", "resolved": "https://registry.npmjs.org/@types/html-minifier-terser/-/html-minifier-terser-6.1.0.tgz", @@ -4077,6 +4140,17 @@ "csstype": "^3.0.2" } }, + "node_modules/@types/react-redux": { + "version": "7.1.33", + "resolved": "https://registry.npmjs.org/@types/react-redux/-/react-redux-7.1.33.tgz", + "integrity": "sha512-NF8m5AjWCkert+fosDsN3hAlHzpjSiXlVy9EgQEmLoBhaNXbmyeGs/aj5dQzKuF+/q+S7JQagorGDW8pJ28Hmg==", + "dependencies": { + "@types/hoist-non-react-statics": "^3.3.0", + "@types/react": "*", + "hoist-non-react-statics": "^3.3.0", + "redux": "^4.0.0" + } + }, "node_modules/@types/react-router": { "version": "5.1.20", "resolved": "https://registry.npmjs.org/@types/react-router/-/react-router-5.1.20.tgz", @@ -4579,11 +4653,29 @@ "node": ">=8" } }, + "node_modules/array-uniq": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/array-uniq/-/array-uniq-1.0.3.tgz", + "integrity": "sha512-MNha4BWQ6JbwhFhj03YK552f7cb3AzoE8SzeljgChvL1dl3IcvggXVz1DilzySZkCja+CXuZbdW7yATchWn8/Q==", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/asap": { "version": "2.0.6", "resolved": "https://registry.npmjs.org/asap/-/asap-2.0.6.tgz", "integrity": "sha512-BSHWgDSAiKs50o2Re8ppvp3seVHXSRM44cdSsT9FfNEUUZLOGWVCsiWaRPWM1Znn+mqZ1OfVZ3z3DWEzSp7hRA==" }, + "node_modules/async": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/async/-/async-3.2.0.tgz", + "integrity": "sha512-TR2mEZFVOj2pLStYxLht7TyfuRzaydfpxr3k9RpHIzMgw7A64dzsdqCxH1WJyQdoe8T10nDXd9wnEigmiuHIZw==" + }, + "node_modules/asynckit": { + "version": "0.4.0", + "resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz", + "integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==" + }, "node_modules/at-least-node": { "version": "1.0.0", "resolved": 
"https://registry.npmjs.org/at-least-node/-/at-least-node-1.0.0.tgz", @@ -4759,6 +4851,25 @@ "resolved": "https://registry.npmjs.org/base16/-/base16-1.0.0.tgz", "integrity": "sha512-pNdYkNPiJUnEhnfXV56+sQy8+AaPcG3POZAUnwr4EeqCUZFz4u2PePbo3e5Gj4ziYPCWGUZT9RHisvJKnwFuBQ==" }, + "node_modules/base64-js": { + "version": "1.5.1", + "resolved": "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz", + "integrity": "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ] + }, "node_modules/batch": { "version": "0.6.1", "resolved": "https://registry.npmjs.org/batch/-/batch-0.6.1.tgz", @@ -4908,6 +5019,29 @@ "node": "^6 || ^7 || ^8 || ^9 || ^10 || ^11 || ^12 || >=13.7" } }, + "node_modules/buffer": { + "version": "6.0.3", + "resolved": "https://registry.npmjs.org/buffer/-/buffer-6.0.3.tgz", + "integrity": "sha512-FTiCpNxtwiZZHEZbcbTIcZjERVICn9yq/pDFkTl95/AxzD1naBctN7YO68riM/gLSDY7sdrMby8hofADYuuqOA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "dependencies": { + "base64-js": "^1.3.1", + "ieee754": "^1.2.1" + } + }, "node_modules/buffer-from": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz", @@ -4969,12 +5103,18 @@ } }, "node_modules/call-bind": { - "version": "1.0.2", - "resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.2.tgz", - "integrity": 
"sha512-7O+FbCihrB5WGbFYesctwmTKae6rOiIzmz1icreWJ+0aA7LJfuqhEso2T9ncpcFtzMQtzXf2QGGueWJGTYsqrA==", + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.7.tgz", + "integrity": "sha512-GHTSNSYICQ7scH7sZ+M2rFopRoLh8t2bLSW6BbgrtLsahOIB5iyAVJf9GjWK3cYTDaMj4XdBpM1cA6pIS0Kv2w==", "dependencies": { - "function-bind": "^1.1.1", - "get-intrinsic": "^1.0.2" + "es-define-property": "^1.0.0", + "es-errors": "^1.3.0", + "function-bind": "^1.1.2", + "get-intrinsic": "^1.2.4", + "set-function-length": "^1.2.1" + }, + "engines": { + "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" @@ -5093,6 +5233,14 @@ "url": "https://github.com/sponsors/wooorm" } }, + "node_modules/charset": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/charset/-/charset-1.0.1.tgz", + "integrity": "sha512-6dVyOOYjpfFcL1Y4qChrAoQLRHvj2ziyhcm0QJlhOcAhykL/k1kTUPbeo+87MNRTRdk2OIIsIXbuF3x2wi5EXg==", + "engines": { + "node": ">=4.0.0" + } + }, "node_modules/cheerio": { "version": "1.0.0-rc.12", "resolved": "https://registry.npmjs.org/cheerio/-/cheerio-1.0.0-rc.12.tgz", @@ -5362,6 +5510,17 @@ "node": ">=10" } }, + "node_modules/combined-stream": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz", + "integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==", + "dependencies": { + "delayed-stream": "~1.0.0" + }, + "engines": { + "node": ">= 0.8" + } + }, "node_modules/comma-separated-tokens": { "version": "1.0.8", "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-1.0.8.tgz", @@ -5384,6 +5543,14 @@ "resolved": "https://registry.npmjs.org/commondir/-/commondir-1.0.1.tgz", "integrity": "sha512-W9pAhw0ja1Edb5GVdIF1mjZw/ASI0AlShXM83UUGe2DVr5TdAPEA1OA8m/g8zWp9x6On7gqufY+FatDbC3MDQg==" }, + "node_modules/component-emitter": { + 
"version": "1.3.1", + "resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.3.1.tgz", + "integrity": "sha512-T0+barUSQRTUQASh8bx02dl+DhF54GtIDY13Y3m9oWTklKbb3Wv974meRpeZ3lp1JpLVECWWNHC4vaG2XHXouQ==", + "funding": { + "url": "https://github.com/sponsors/sindresorhus" + } + }, "node_modules/compressible": { "version": "2.0.18", "resolved": "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz", @@ -5506,6 +5673,11 @@ "resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz", "integrity": "sha512-QADzlaHc8icV8I7vbaJXJwod9HWYp8uCqf1xa4OfNu1T7JVxQIrUgOWtHdNDtPiywmFbiS12VjotIXLrKM3orQ==" }, + "node_modules/cookiejar": { + "version": "2.1.4", + "resolved": "https://registry.npmjs.org/cookiejar/-/cookiejar-2.1.4.tgz", + "integrity": "sha512-LDx6oHrK+PhzLKJU9j5S7/Y3jM/mUHvD/DeI1WQmJn652iPC5Y4TBzC9l+5OMOXlyTTA+SmVUPm0HQUwpD5Jqw==" + }, "node_modules/copy-text-to-clipboard": { "version": "3.1.0", "resolved": "https://registry.npmjs.org/copy-text-to-clipboard/-/copy-text-to-clipboard-3.1.0.tgz", @@ -5704,6 +5876,11 @@ "node": ">= 8" } }, + "node_modules/crypto-js": { + "version": "4.2.0", + "resolved": "https://registry.npmjs.org/crypto-js/-/crypto-js-4.2.0.tgz", + "integrity": "sha512-KALDyEYgpY+Rlob/iriUtjV6d5Eq+Y191A5g4UqLAi8CyGP9N1+FdVbkc1SxKc2r4YAYqG8JzO2KGL+AizD70Q==" + }, "node_modules/crypto-random-string": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-2.0.0.tgz", @@ -6032,6 +6209,14 @@ } } }, + "node_modules/decamelize": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz", + "integrity": "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/decompress-response": { "version": "3.3.0", "resolved": 
"https://registry.npmjs.org/decompress-response/-/decompress-response-3.3.0.tgz", @@ -6075,6 +6260,22 @@ "resolved": "https://registry.npmjs.org/defer-to-connect/-/defer-to-connect-1.1.3.tgz", "integrity": "sha512-0ISdNousHvZT2EiFlZeZAHBUvSxmKswVCEf8hW7KWgG4a8MVEu/3Vb6uWYozkjylyCxe0JBIiRB1jV45S70WVQ==" }, + "node_modules/define-data-property": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/define-data-property/-/define-data-property-1.1.4.tgz", + "integrity": "sha512-rBMvIzlpA8v6E+SJZoo++HAYqsLrkg7MSfIinMPFhmkorw7X+dOXVJQs+QT69zGkzMyfDnIMN2Wid1+NbL3T+A==", + "dependencies": { + "es-define-property": "^1.0.0", + "es-errors": "^1.3.0", + "gopd": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/define-lazy-prop": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/define-lazy-prop/-/define-lazy-prop-2.0.0.tgz", @@ -6119,6 +6320,14 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/delayed-stream": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz", + "integrity": "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==", + "engines": { + "node": ">=0.4.0" + } + }, "node_modules/depd": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/depd/-/depd-2.0.0.tgz", @@ -6195,6 +6404,15 @@ "resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz", "integrity": "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A==" }, + "node_modules/dezalgo": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/dezalgo/-/dezalgo-1.0.4.tgz", + "integrity": "sha512-rXSP0bf+5n0Qonsb+SVVfNfIsimO4HEtmnIpPHY8Q1UCzKlQrDMfdobr8nJOOsRgWCyMRqeSBQzmWUMq7zvVig==", + "dependencies": { + "asap": "^2.0.0", + "wrappy": 
"1" + } + }, "node_modules/dir-glob": { "version": "3.0.1", "resolved": "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz", @@ -6222,6 +6440,66 @@ "node": ">=6" } }, + "node_modules/docusaurus-plugin-openapi": { + "version": "0.6.4", + "resolved": "https://registry.npmjs.org/docusaurus-plugin-openapi/-/docusaurus-plugin-openapi-0.6.4.tgz", + "integrity": "sha512-RkJ68mndhbpx7x1Dukj6BUgUqQEuL5Iv3FFiVIxSCVFw5IuIk+5Oo7tiKbIUMfUHPbvmGSp4839JBTmM99153Q==", + "dependencies": { + "@docusaurus/mdx-loader": "^2.0.0", + "@docusaurus/plugin-content-docs": "^2.0.0", + "@docusaurus/utils": "^2.0.0", + "@docusaurus/utils-validation": "^2.0.0", + "axios": "^0.26.1", + "chalk": "^4.1.2", + "clsx": "^1.1.1", + "fs-extra": "^9.0.1", + "js-yaml": "^4.1.0", + "json-refs": "^3.0.15", + "json-schema-resolve-allof": "^1.5.0", + "lodash": "^4.17.20", + "openapi-to-postmanv2": "^1.2.1", + "postman-collection": "^4.1.0", + "remark-admonitions": "^1.2.1", + "webpack": "^5.73.0" + }, + "engines": { + "node": ">=14" + }, + "peerDependencies": { + "react": "^16.8.4 || ^17.0.0", + "react-dom": "^16.8.4 || ^17.0.0" + } + }, + "node_modules/docusaurus-plugin-openapi/node_modules/axios": { + "version": "0.26.1", + "resolved": "https://registry.npmjs.org/axios/-/axios-0.26.1.tgz", + "integrity": "sha512-fPwcX4EvnSHuInCMItEhAGnaSEXRBjtzh9fOtsE6E1G6p7vl7edEeZe11QHf18+6+9gR5PbKV/sGKNaD8YaMeA==", + "dependencies": { + "follow-redirects": "^1.14.8" + } + }, + "node_modules/docusaurus-plugin-openapi/node_modules/fs-extra": { + "version": "9.1.0", + "resolved": "https://registry.npmjs.org/fs-extra/-/fs-extra-9.1.0.tgz", + "integrity": "sha512-hcg3ZmepS30/7BSFqRvoo3DOMQu7IjqxO5nCDt+zM9XWjb33Wg7ziNT+Qvqbuc3+gWpzO02JubVyk2G4Zvo1OQ==", + "dependencies": { + "at-least-node": "^1.0.0", + "graceful-fs": "^4.2.0", + "jsonfile": "^6.0.1", + "universalify": "^2.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/docusaurus-plugin-proxy": { + "version": 
"0.6.3", + "resolved": "https://registry.npmjs.org/docusaurus-plugin-proxy/-/docusaurus-plugin-proxy-0.6.3.tgz", + "integrity": "sha512-HAR76IsuSWlVI1K6P8fEJDjhHxT3LLdXGr+ZxNBm6DJTUQ8Xf057nHR8BhB5sfwmzrDPup5wChP/nuOVAfU6wg==", + "engines": { + "node": ">=14" + } + }, "node_modules/docusaurus-plugin-sass": { "version": "0.2.3", "resolved": "https://registry.npmjs.org/docusaurus-plugin-sass/-/docusaurus-plugin-sass-0.2.3.tgz", @@ -6234,6 +6512,57 @@ "sass": "^1.30.0" } }, + "node_modules/docusaurus-preset-openapi": { + "version": "0.6.4", + "resolved": "https://registry.npmjs.org/docusaurus-preset-openapi/-/docusaurus-preset-openapi-0.6.4.tgz", + "integrity": "sha512-jSgc23SDp13AHqFu4ehsCIRWwHNkQu9llKV56s3Ik2x7B7hWTQtVXBScz+m3qfXClztEx/XSF5nbZ55OfutyPA==", + "dependencies": { + "@docusaurus/preset-classic": "^2.0.0", + "docusaurus-plugin-openapi": "^0.6.4", + "docusaurus-plugin-proxy": "^0.6.3", + "docusaurus-theme-openapi": "^0.6.4" + }, + "engines": { + "node": ">=14" + }, + "peerDependencies": { + "react": "^16.8.4 || ^17.0.0", + "react-dom": "^16.8.4 || ^17.0.0" + } + }, + "node_modules/docusaurus-theme-openapi": { + "version": "0.6.4", + "resolved": "https://registry.npmjs.org/docusaurus-theme-openapi/-/docusaurus-theme-openapi-0.6.4.tgz", + "integrity": "sha512-j+KZTo8f/jtIQ13WVsXUybOXsdUfKabJLE8Wi/RbVacVhB7WSR2in1wy4/gBmQ7xTPptedJOZm2cRB72dXgbiw==", + "dependencies": { + "@docusaurus/theme-common": "^2.0.0", + "@mdx-js/react": "^1.6.22", + "@monaco-editor/react": "^4.3.1", + "@reduxjs/toolkit": "^1.7.1", + "buffer": "^6.0.3", + "clsx": "^1.1.1", + "crypto-js": "^4.1.1", + "docusaurus-plugin-openapi": "^0.6.4", + "immer": "^9.0.7", + "lodash": "^4.17.20", + "monaco-editor": "^0.31.1", + "postman-code-generators": "^1.0.0", + "postman-collection": "^4.1.0", + "prism-react-renderer": "^1.2.1", + "process": "^0.11.10", + "react-magic-dropzone": "^1.0.1", + "react-redux": "^7.2.0", + "redux-devtools-extension": "^2.13.8", + 
"webpack": "^5.73.0" + }, + "engines": { + "node": ">=14" + }, + "peerDependencies": { + "react": "^16.8.4 || ^17.0.0", + "react-dom": "^16.8.4 || ^17.0.0" + } + }, "node_modules/dom-converter": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/dom-converter/-/dom-converter-0.2.0.tgz", @@ -6415,6 +6744,25 @@ "is-arrayish": "^0.2.1" } }, + "node_modules/es-define-property": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.0.tgz", + "integrity": "sha512-jxayLKShrEqqzJ0eumQbVhTYQM27CfT1T35+gCgDFoL82JLsXqTJ76zv6A0YLOgEnLUMvLzsDsGIrl8NFpT2gQ==", + "dependencies": { + "get-intrinsic": "^1.2.4" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/es-errors": { + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "engines": { + "node": ">= 0.4" + } + }, "node_modules/es-module-lexer": { "version": "0.9.3", "resolved": "https://registry.npmjs.org/es-module-lexer/-/es-module-lexer-0.9.3.tgz", @@ -6687,6 +7035,11 @@ "node": ">=0.10.0" } }, + "node_modules/faker": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/faker/-/faker-5.1.0.tgz", + "integrity": "sha512-RrWKFSSA/aNLP0g3o2WW1Zez7/MnMr7xkiZmoCfAGZmdkDQZ6l2KtuXHN5XjdvpRjDl8+3vf+Rrtl06Z352+Mw==" + }, "node_modules/fast-deep-equal": { "version": "3.1.3", "resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", @@ -6712,6 +7065,11 @@ "resolved": "https://registry.npmjs.org/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==" }, + "node_modules/fast-safe-stringify": { + "version": "2.1.1", + "resolved": 
"https://registry.npmjs.org/fast-safe-stringify/-/fast-safe-stringify-2.1.1.tgz", + "integrity": "sha512-W+KJc2dmILlPplD/H4K9l9LcAHAfPtP6BY84uVLXQ6Evcz9Lcg33Y2z1IVblT6xdY54PXYVHEv+0Wpq8Io6zkA==" + }, "node_modules/fast-url-parser": { "version": "1.1.3", "resolved": "https://registry.npmjs.org/fast-url-parser/-/fast-url-parser-1.1.3.tgz", @@ -6813,6 +7171,14 @@ "url": "https://opencollective.com/webpack" } }, + "node_modules/file-type": { + "version": "3.9.0", + "resolved": "https://registry.npmjs.org/file-type/-/file-type-3.9.0.tgz", + "integrity": "sha512-RLoqTXE8/vPmMuTI88DAzhMYC99I8BWv7zYP4A1puo5HIjEJ5EX48ighy4ZyKMG9EDXxBgW6e++cn7d1xuFghA==", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/filesize": { "version": "8.0.7", "resolved": "https://registry.npmjs.org/filesize/-/filesize-8.0.7.tgz", @@ -7060,6 +7426,33 @@ "node": ">=6" } }, + "node_modules/form-data": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz", + "integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==", + "dependencies": { + "asynckit": "^0.4.0", + "combined-stream": "^1.0.8", + "mime-types": "^2.1.12" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/formidable": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/formidable/-/formidable-2.1.2.tgz", + "integrity": "sha512-CM3GuJ57US06mlpQ47YcunuUZ9jpm8Vx+P2CGt2j7HpgkKZO/DJYQ0Bobim8G6PFQmK5lOqOOdUXboU+h73A4g==", + "dependencies": { + "dezalgo": "^1.0.4", + "hexoid": "^1.0.0", + "once": "^1.4.0", + "qs": "^6.11.0" + }, + "funding": { + "url": "https://ko-fi.com/tunnckoCore/commissions" + } + }, "node_modules/forwarded": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.2.0.tgz", @@ -7125,9 +7518,12 @@ } }, "node_modules/function-bind": { - "version": "1.1.1", - "resolved": 
"https://registry.npmjs.org/function-bind/-/function-bind-1.1.1.tgz", - "integrity": "sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==" + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } }, "node_modules/gensync": { "version": "1.0.0-beta.2", @@ -7137,14 +7533,27 @@ "node": ">=6.9.0" } }, + "node_modules/get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "engines": { + "node": "6.* || 8.* || >= 10.*" + } + }, "node_modules/get-intrinsic": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.0.tgz", - "integrity": "sha512-L049y6nFOuom5wGyRc3/gdTLO94dySVKRACj1RmJZBQXlbTMhtNIgkWkUHq+jYmZvKf14EW1EoJnnjbmoHij0Q==", + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.2.4.tgz", + "integrity": "sha512-5uYhsJH8VJBTv7oslg4BznJYhDoRI6waYCxMmCdnTrcCrHA/fCFKoTFz2JKKE0HdDFUF7/oQuhzumXJK7paBRQ==", "dependencies": { - "function-bind": "^1.1.1", - "has": "^1.0.3", - "has-symbols": "^1.0.3" + "es-errors": "^1.3.0", + "function-bind": "^1.1.2", + "has-proto": "^1.0.1", + "has-symbols": "^1.0.3", + "hasown": "^2.0.0" + }, + "engines": { + "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" @@ -7155,6 +7564,14 @@ "resolved": "https://registry.npmjs.org/get-own-enumerable-property-symbols/-/get-own-enumerable-property-symbols-3.0.2.tgz", "integrity": 
"sha512-I0UBV/XOz1XkIJHEUDMZAbzCThU/H8DxmSfmdGcKPnVhu2VfFqr34jr9777IyaTYvxjedWhqVIilEDsCdP5G6g==" }, + "node_modules/get-stdin": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/get-stdin/-/get-stdin-5.0.1.tgz", + "integrity": "sha512-jZV7n6jGE3Gt7fgSTJoz91Ak5MuTLwMwkoYdjxuJ/AmjIsE1UC03y/IWkZCQGEvVNS9qoRNwy5BCqxImv0FVeA==", + "engines": { + "node": ">=0.12.0" + } + }, "node_modules/get-stream": { "version": "4.1.0", "resolved": "https://registry.npmjs.org/get-stream/-/get-stream-4.1.0.tgz", @@ -7317,6 +7734,17 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/gopd": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/gopd/-/gopd-1.0.1.tgz", + "integrity": "sha512-d65bNlIadxvpb/A2abVdlqKqV563juRnZ1Wtk6s1sIR8uNsXR70xqIzVqxVf1eTqDunwT2MkczEeaezCKTZhwA==", + "dependencies": { + "get-intrinsic": "^1.1.3" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/got": { "version": "9.6.0", "resolved": "https://registry.npmjs.org/got/-/got-9.6.0.tgz", @@ -7343,6 +7771,14 @@ "resolved": "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz", "integrity": "sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ==" }, + "node_modules/graphlib": { + "version": "2.1.8", + "resolved": "https://registry.npmjs.org/graphlib/-/graphlib-2.1.8.tgz", + "integrity": "sha512-jcLLfkpoVGmH7/InMC/1hIvOPSUh38oJtGhvrOFGzioE1DZ+0YW16RgmOJhHiuWTvGiJQ9Z1Ik43JvkRPRvE+A==", + "dependencies": { + "lodash": "^4.17.15" + } + }, "node_modules/gray-matter": { "version": "4.0.3", "resolved": "https://registry.npmjs.org/gray-matter/-/gray-matter-4.0.3.tgz", @@ -7416,20 +7852,20 @@ } }, "node_modules/has-property-descriptors": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/has-property-descriptors/-/has-property-descriptors-1.0.0.tgz", - "integrity": 
"sha512-62DVLZGoiEBDHQyqG4w9xCuZ7eJEwNmJRWw2VY84Oedb7WFcA27fiEVe8oUQx9hAUJ4ekurquucTGwsyO1XGdQ==", + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/has-property-descriptors/-/has-property-descriptors-1.0.2.tgz", + "integrity": "sha512-55JNKuIW+vq4Ke1BjOTjM2YctQIvCT7GFzHwmfZPGo5wnrgkid0YQtnAleFSqumZm4az3n2BS+erby5ipJdgrg==", "dependencies": { - "get-intrinsic": "^1.1.1" + "es-define-property": "^1.0.0" }, "funding": { "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/has-symbols": { + "node_modules/has-proto": { "version": "1.0.3", - "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.3.tgz", - "integrity": "sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==", + "resolved": "https://registry.npmjs.org/has-proto/-/has-proto-1.0.3.tgz", + "integrity": "sha512-SJ1amZAJUiZS+PhsVLf5tGydlaVB8EdFpaSO4gmiUKUOxk8qzn5AIy4ZeJUmh22znIdk/uMAUT2pl3FxzVUH+Q==", "engines": { "node": ">= 0.4" }, @@ -7437,14 +7873,36 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/has-yarn": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/has-yarn/-/has-yarn-2.1.0.tgz", - "integrity": "sha512-UqBRqi4ju7T+TqGNdqAO0PaSVGsDGJUBQvk9eUWNGRY1CFGDzYhLWoM7JQEemnlvVcv/YEmc2wNW8BC24EnUsw==", + "node_modules/has-symbols": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.3.tgz", + "integrity": "sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/has-yarn": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/has-yarn/-/has-yarn-2.1.0.tgz", + "integrity": "sha512-UqBRqi4ju7T+TqGNdqAO0PaSVGsDGJUBQvk9eUWNGRY1CFGDzYhLWoM7JQEemnlvVcv/YEmc2wNW8BC24EnUsw==", "engines": { "node": ">=8" 
} }, + "node_modules/hasown": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.1.tgz", + "integrity": "sha512-1/th4MHjnwncwXsIW6QMzlvYL9kG5e/CpVvLRZe4XPa8TOUNbCELqmvhDmnkNsAjwaG4+I8gJJL0JBvTTLO9qA==", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, "node_modules/hast-to-hyperscript": { "version": "9.0.1", "resolved": "https://registry.npmjs.org/hast-to-hyperscript/-/hast-to-hyperscript-9.0.1.tgz", @@ -7555,6 +8013,14 @@ "he": "bin/he" } }, + "node_modules/hexoid": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/hexoid/-/hexoid-1.0.0.tgz", + "integrity": "sha512-QFLV0taWQOZtvIRIAdBChesmogZrtuXvVWsFHZTk2SU+anspqZ2vMnoLg7IE1+Uk16N19APic1BuF8bC8c2m5g==", + "engines": { + "node": ">=8" + } + }, "node_modules/history": { "version": "4.10.1", "resolved": "https://registry.npmjs.org/history/-/history-4.10.1.tgz", @@ -7789,6 +8255,16 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/http-reasons": { + "version": "0.1.0", + "resolved": "https://registry.npmjs.org/http-reasons/-/http-reasons-0.1.0.tgz", + "integrity": "sha512-P6kYh0lKZ+y29T2Gqz+RlC9WBLhKe8kDmcJ+A+611jFfxdPsbMRQ5aNmFRM3lENqFkK+HTTL+tlQviAiv0AbLQ==" + }, + "node_modules/http2-client": { + "version": "1.3.5", + "resolved": "https://registry.npmjs.org/http2-client/-/http2-client-1.3.5.tgz", + "integrity": "sha512-EC2utToWl4RKfs5zd36Mxq7nzHHBuomZboI0yYL6Y0RmBgT7Sgkq4rQ0ezFTYoIsSs7Tm9SJe+o2FcAg6GBhGA==" + }, "node_modules/human-signals": { "version": "2.1.0", "resolved": "https://registry.npmjs.org/human-signals/-/human-signals-2.1.0.tgz", @@ -7819,6 +8295,25 @@ "postcss": "^8.1.0" } }, + "node_modules/ieee754": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz", + "integrity": 
"sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ] + }, "node_modules/ignore": { "version": "5.2.4", "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz", @@ -8376,6 +8871,67 @@ "resolved": "https://registry.npmjs.org/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", "integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==" }, + "node_modules/json-refs": { + "version": "3.0.15", + "resolved": "https://registry.npmjs.org/json-refs/-/json-refs-3.0.15.tgz", + "integrity": "sha512-0vOQd9eLNBL18EGl5yYaO44GhixmImes2wiYn9Z3sag3QnehWrYWlB9AFtMxCL2Bj3fyxgDYkxGFEU/chlYssw==", + "dependencies": { + "commander": "~4.1.1", + "graphlib": "^2.1.8", + "js-yaml": "^3.13.1", + "lodash": "^4.17.15", + "native-promise-only": "^0.8.1", + "path-loader": "^1.0.10", + "slash": "^3.0.0", + "uri-js": "^4.2.2" + }, + "bin": { + "json-refs": "bin/json-refs" + }, + "engines": { + "node": ">=0.8" + } + }, + "node_modules/json-refs/node_modules/argparse": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", + "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", + "dependencies": { + "sprintf-js": "~1.0.2" + } + }, + "node_modules/json-refs/node_modules/commander": { + "version": "4.1.1", + "resolved": "https://registry.npmjs.org/commander/-/commander-4.1.1.tgz", + "integrity": "sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA==", + "engines": { + "node": ">= 6" + } + }, + 
"node_modules/json-refs/node_modules/js-yaml": { + "version": "3.14.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.14.1.tgz", + "integrity": "sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==", + "dependencies": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/json-schema-resolve-allof": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/json-schema-resolve-allof/-/json-schema-resolve-allof-1.5.0.tgz", + "integrity": "sha512-Jgn6BQGSLDp3D7bTYrmCbP/p7SRFz5BfpeEJ9A7sXuVADMc14aaDN1a49zqk9D26wwJlcNvjRpT63cz1VgFZeg==", + "dependencies": { + "get-stdin": "^5.0.1", + "lodash": "^4.14.0" + }, + "bin": { + "json-schema-resolve-allof": "bin/json-schema-resolve-allof" + } + }, "node_modules/json-schema-traverse": { "version": "0.4.1", "resolved": "https://registry.npmjs.org/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", @@ -8476,6 +9032,14 @@ "resolved": "https://registry.npmjs.org/lines-and-columns/-/lines-and-columns-1.2.4.tgz", "integrity": "sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==" }, + "node_modules/liquid-json": { + "version": "0.3.1", + "resolved": "https://registry.npmjs.org/liquid-json/-/liquid-json-0.3.1.tgz", + "integrity": "sha512-wUayTU8MS827Dam6MxgD72Ui+KOSF+u/eIqpatOtjnvgJ0+mnDq33uC2M7J0tPK+upe/DpUAuK4JUU89iBoNKQ==", + "engines": { + "node": ">=4" + } + }, "node_modules/loader-runner": { "version": "4.3.0", "resolved": "https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz", @@ -8513,6 +9077,11 @@ "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz", "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==" }, + "node_modules/lodash.clonedeep": { + "version": "4.5.0", + "resolved": 
"https://registry.npmjs.org/lodash.clonedeep/-/lodash.clonedeep-4.5.0.tgz", + "integrity": "sha512-H5ZhCF25riFd9uB5UCkVKo61m3S/xZk1x4wA6yp/L3RFP6Z/eHH1ymQcGLo7J3GMPfm0V/7m1tryHuGVxpqEBQ==" + }, "node_modules/lodash.curry": { "version": "4.1.1", "resolved": "https://registry.npmjs.org/lodash.curry/-/lodash.curry-4.1.1.tgz", @@ -8523,16 +9092,36 @@ "resolved": "https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz", "integrity": "sha512-FT1yDzDYEoYWhnSGnpE/4Kj1fLZkDFyqRb7fNt6FdYOSxlUWAtp42Eh6Wb0rGIv/m9Bgo7x4GhQbm5Ys4SG5ow==" }, + "node_modules/lodash.escaperegexp": { + "version": "4.1.2", + "resolved": "https://registry.npmjs.org/lodash.escaperegexp/-/lodash.escaperegexp-4.1.2.tgz", + "integrity": "sha512-TM9YBvyC84ZxE3rgfefxUWiQKLilstD6k7PTGt6wfbtXF8ixIJLOL3VYyV/z+ZiPLsVxAsKAFVwWlWeb2Y8Yyw==" + }, "node_modules/lodash.flow": { "version": "3.5.0", "resolved": "https://registry.npmjs.org/lodash.flow/-/lodash.flow-3.5.0.tgz", "integrity": "sha512-ff3BX/tSioo+XojX4MOsOMhJw0nZoUEF011LX8g8d3gvjVbxd89cCio4BCXronjxcTUIJUoqKEUA+n4CqvvRPw==" }, + "node_modules/lodash.isplainobject": { + "version": "4.0.6", + "resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz", + "integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==" + }, + "node_modules/lodash.isstring": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz", + "integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==" + }, "node_modules/lodash.memoize": { "version": "4.1.2", "resolved": "https://registry.npmjs.org/lodash.memoize/-/lodash.memoize-4.1.2.tgz", "integrity": "sha512-t7j+NzmgnQzTAYXcsHYLgimltOV1MXHtlOWf6GjL9Kj8GK5FInw5JotxvbOs+IvV1/Dzo04/fCGfLVs7aXb4Ag==" }, + "node_modules/lodash.mergewith": { + "version": "4.6.2", + 
"resolved": "https://registry.npmjs.org/lodash.mergewith/-/lodash.mergewith-4.6.2.tgz", + "integrity": "sha512-GK3g5RPZWTRSeLSpgP8Xhra+pnjBC56q9FZYe1d5RN3TJ35dbkGy3YqBSMbyCrlbi+CM9Z3Jk5yTL7RCsqboyQ==" + }, "node_modules/lodash.uniq": { "version": "4.5.0", "resolved": "https://registry.npmjs.org/lodash.uniq/-/lodash.uniq-4.5.0.tgz", @@ -8604,6 +9193,17 @@ "url": "https://github.com/sponsors/wooorm" } }, + "node_modules/marked": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/marked/-/marked-1.1.1.tgz", + "integrity": "sha512-mJzT8D2yPxoPh7h0UXkB+dBj4FykPJ2OIfxAWeIHrvoHDkFxukV/29QxoFQoPM6RLEwhIFdJpmKBlqVM3s2ZIw==", + "bin": { + "marked": "bin/marked" + }, + "engines": { + "node": ">= 8.16.2" + } + }, "node_modules/mdast-squeeze-paragraphs": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/mdast-squeeze-paragraphs/-/mdast-squeeze-paragraphs-4.0.0.tgz", @@ -8742,6 +9342,14 @@ "node": ">= 0.6" } }, + "node_modules/mime-format": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/mime-format/-/mime-format-2.0.1.tgz", + "integrity": "sha512-XxU3ngPbEnrYnNbIX+lYSaYg0M01v6p2ntd2YaFksTu0vayaw5OJvbdRyWs07EYRlLED5qadUZ+xo+XhOvFhwg==", + "dependencies": { + "charset": "^1.0.0" + } + }, "node_modules/mime-types": { "version": "2.1.18", "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.18.tgz", @@ -8869,6 +9477,11 @@ "node": ">=8" } }, + "node_modules/monaco-editor": { + "version": "0.31.1", + "resolved": "https://registry.npmjs.org/monaco-editor/-/monaco-editor-0.31.1.tgz", + "integrity": "sha512-FYPwxGZAeP6mRRyrr5XTGHD9gRXVjy7GUzF4IPChnyt3fS5WrNxIkS8DNujWf6EQy0Zlzpxw8oTVE+mWI2/D1Q==" + }, "node_modules/mrmime": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/mrmime/-/mrmime-1.0.1.tgz", @@ -8911,6 +9524,11 @@ "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1" } }, + "node_modules/native-promise-only": { + "version": "0.8.1", 
+ "resolved": "https://registry.npmjs.org/native-promise-only/-/native-promise-only-0.8.1.tgz", + "integrity": "sha512-zkVhZUA3y8mbz652WrL5x0fB0ehrBkulWT3TomAQ9iDtyXZvzKeEA6GPxAItBYeNYl5yngKRX612qHOhvMkDeg==" + }, "node_modules/negotiator": { "version": "0.6.3", "resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz", @@ -8960,6 +9578,17 @@ } } }, + "node_modules/node-fetch-h2": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/node-fetch-h2/-/node-fetch-h2-2.3.0.tgz", + "integrity": "sha512-ofRW94Ab0T4AOh5Fk8t0h8OBWrmjb0SSB20xh1H8YnPV9EJ+f5AMoYSUQ2zgJ4Iq2HAK0I2l5/Nequ8YzFS3Hg==", + "dependencies": { + "http2-client": "^1.2.5" + }, + "engines": { + "node": "4.x || >=6.0.0" + } + }, "node_modules/node-forge": { "version": "1.3.1", "resolved": "https://registry.npmjs.org/node-forge/-/node-forge-1.3.1.tgz", @@ -9027,6 +9656,41 @@ "url": "https://github.com/fb55/nth-check?sponsor=1" } }, + "node_modules/number-is-nan": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/number-is-nan/-/number-is-nan-1.0.1.tgz", + "integrity": "sha512-4jbtZXNAsfZbAHiiqjLPBiCl16dES1zI4Hpzzxw61Tk+loF+sBDBKx1ICKKKwIqQ7M0mFn1TmkN7euSncWgHiQ==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/oas-kit-common": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/oas-kit-common/-/oas-kit-common-1.0.8.tgz", + "integrity": "sha512-pJTS2+T0oGIwgjGpw7sIRU8RQMcUoKCDWFLdBqKB2BNmGpbBMH2sdqAaOXUg8OzonZHU0L7vfJu1mJFEiYDWOQ==", + "dependencies": { + "fast-safe-stringify": "^2.0.7" + } + }, + "node_modules/oas-resolver-browser": { + "version": "2.3.3", + "resolved": "https://registry.npmjs.org/oas-resolver-browser/-/oas-resolver-browser-2.3.3.tgz", + "integrity": "sha512-KvggQ6xU7WlUWRYZKEktR90zJtNCHi1wbTAZuUX6oSfmBSdZo/b26rzfg3w2AdPVwQPRXMga6tqLW3OhbUF0Qg==", + "dependencies": { + "node-fetch-h2": "^2.3.0", + "oas-kit-common": "^1.0.8", + "path-browserify": 
"^1.0.1", + "reftools": "^1.1.1", + "yaml": "^1.8.3", + "yargs": "^15.3.1" + }, + "bin": { + "resolve": "resolve.js" + }, + "funding": { + "url": "https://github.com/Mermade/oas-kit?sponsor=1" + } + }, "node_modules/object-assign": { "version": "4.1.1", "resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz", @@ -9036,9 +9700,9 @@ } }, "node_modules/object-inspect": { - "version": "1.12.3", - "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.12.3.tgz", - "integrity": "sha512-geUvdk7c+eizMNUDkRpW1wJwgfOiOeHbxBR/hLXK1aT6zmVSO0jsQcs7fj6MGw89jC/cjGfLcNOrtMYtGqm81g==", + "version": "1.13.1", + "resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.1.tgz", + "integrity": "sha512-5qoj1RUiKOMsCCNLV1CBiPYE10sziTsnmNxkAI/rZhiD63CF7IqdFGC/XzjWjpSgLf0LxXX3bDFIh0E18f6UhQ==", "funding": { "url": "https://github.com/sponsors/ljharb" } @@ -9130,6 +9794,180 @@ "url": "https://github.com/sponsors/sindresorhus" } }, + "node_modules/openapi-to-postmanv2": { + "version": "1.2.7", + "resolved": "https://registry.npmjs.org/openapi-to-postmanv2/-/openapi-to-postmanv2-1.2.7.tgz", + "integrity": "sha512-oG3PZfAAljy5ebot8DZGLFDNNmDZ/qWqI/dboWlgg5hRj6dSSrXeiyXL6VQpcGDalxVX4jSChufOq2eDsFXp4w==", + "dependencies": { + "ajv": "6.12.3", + "async": "3.2.0", + "commander": "2.20.3", + "js-yaml": "3.13.1", + "lodash": "4.17.20", + "oas-resolver-browser": "2.3.3", + "path-browserify": "1.0.1", + "postman-collection": "3.6.6", + "yaml": "1.8.3" + }, + "bin": { + "openapi2postmanv2": "bin/openapi2postmanv2.js" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/ajv": { + "version": "6.12.3", + "resolved": "https://registry.npmjs.org/ajv/-/ajv-6.12.3.tgz", + "integrity": "sha512-4K0cK3L1hsqk9xIb2z9vs/XU+PGJZ9PNpJRDS9YLzmNdX6jmVPfamLvTJr0aDAusnHyCHO6MjzlkAsgtqp9teA==", + "dependencies": { + "fast-deep-equal": "^3.1.1", + 
"fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/epoberezkin" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/argparse": { + "version": "1.0.10", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz", + "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", + "dependencies": { + "sprintf-js": "~1.0.2" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/commander": { + "version": "2.20.3", + "resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz", + "integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==" + }, + "node_modules/openapi-to-postmanv2/node_modules/iconv-lite": { + "version": "0.6.2", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.2.tgz", + "integrity": "sha512-2y91h5OpQlolefMPmUlivelittSWy0rP+oYVpn6A7GwVHNE8AWzoYOBNmlwks3LobaJxgHCYZAnyNo2GgpNRNQ==", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/js-yaml": { + "version": "3.13.1", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-3.13.1.tgz", + "integrity": "sha512-YfbcO7jXDdyj0DGxYVSlSeQNHbD7XPWvrVWeVUujrQEoZzWJIRrCPoyk6kL6IAjAG2IolMK4T0hNUe0HOUs5Jw==", + "dependencies": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/lodash": { + "version": "4.17.20", + "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz", + "integrity": "sha512-PlhdFcillOINfeV7Ni6oF1TAEayyZBoZ8bcshTHqOYJYlrqzRK5hagpagky5o4HfCzzd1TRkXPMFq6cKk9rGmA==" + }, + "node_modules/openapi-to-postmanv2/node_modules/mime-db": { + "version": 
"1.44.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.44.0.tgz", + "integrity": "sha512-/NOTfLrsPBVeH7YtFPgsVWveuL+4SjjYxaQ1xtM1KMFj7HdxlBlxeyNLzhyJVx7r4rZGJAZ/6lkKCitSc/Nmpg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/mime-format": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/mime-format/-/mime-format-2.0.0.tgz", + "integrity": "sha512-sv1KDeJFutfXbT+MpIuExruuVZ7LSNQVHIxf7IZVr0a/qWKcHY8DHklWoO6CWf7QnGLl0eC8vBEghl5paWSqqg==", + "dependencies": { + "charset": "^1.0.0" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/mime-types": { + "version": "2.1.27", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.27.tgz", + "integrity": "sha512-JIhqnCasI9yD+SsmkquHBxTSEuZdQX5BuQnS2Vc7puQQQ+8yiP5AY5uWhpdv4YL4VM5c6iliiYWPgJ/nJQLp7w==", + "dependencies": { + "mime-db": "1.44.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/postman-collection": { + "version": "3.6.6", + "resolved": "https://registry.npmjs.org/postman-collection/-/postman-collection-3.6.6.tgz", + "integrity": "sha512-fm9AGKHbL2coSzD5nw+F07JrX7jzqu2doGIXevPPrwlpTZyTM6yagEdENeO/Na8rSUrI1+tKPj+TgAFiLvtF4w==", + "dependencies": { + "escape-html": "1.0.3", + "faker": "5.1.0", + "file-type": "3.9.0", + "http-reasons": "0.1.0", + "iconv-lite": "0.6.2", + "liquid-json": "0.3.1", + "lodash": "4.17.20", + "marked": "1.1.1", + "mime-format": "2.0.0", + "mime-types": "2.1.27", + "postman-url-encoder": "2.1.3", + "sanitize-html": "1.20.1", + "semver": "7.3.2", + "uuid": "3.4.0" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/postman-url-encoder": { + "version": "2.1.3", + "resolved": "https://registry.npmjs.org/postman-url-encoder/-/postman-url-encoder-2.1.3.tgz", + "integrity": "sha512-CwQjnoxaugCGeOyzVeZ4k1cNQ6iS8OBCzuWzcf4kLStKeRp0MwmLKYv25frynmDpugUUimq/d+FZCq6GtIX9Ag==", + "dependencies": { + 
"postman-collection": "^3.6.4", + "punycode": "^2.1.1" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "engines": { + "node": ">=6" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/semver": { + "version": "7.3.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.3.2.tgz", + "integrity": "sha512-OrOb32TeeambH6UrhtShmF7CRDqhL6/5XpPNp2DuRH6+9QLw/orhp72j87v8Qa1ScDkvrrBNpZcDejAirJmfXQ==", + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/uuid": { + "version": "3.4.0", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-3.4.0.tgz", + "integrity": "sha512-HjSDRw6gZE5JMggctHBcjVak08+KEVhSIiDzFnT9S9aegmp85S/bReBVTb4QTFaRNptJ9kuYaNhnbNEOkbKb/A==", + "deprecated": "Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. 
See https://v8.dev/blog/math-random for details.", + "bin": { + "uuid": "bin/uuid" + } + }, + "node_modules/openapi-to-postmanv2/node_modules/yaml": { + "version": "1.8.3", + "resolved": "https://registry.npmjs.org/yaml/-/yaml-1.8.3.tgz", + "integrity": "sha512-X/v7VDnK+sxbQ2Imq4Jt2PRUsRsP7UcpSl3Llg6+NRRqWLIvxkMFYtH1FmvwNGYRKKPa+EPA4qDBlI9WVG1UKw==", + "dependencies": { + "@babel/runtime": "^7.8.7" + }, + "engines": { + "node": ">= 6" + } + }, "node_modules/opener": { "version": "1.5.2", "resolved": "https://registry.npmjs.org/opener/-/opener-1.5.2.tgz", @@ -9326,6 +10164,20 @@ "tslib": "^2.0.3" } }, + "node_modules/path": { + "version": "0.12.7", + "resolved": "https://registry.npmjs.org/path/-/path-0.12.7.tgz", + "integrity": "sha512-aXXC6s+1w7otVF9UletFkFcDsJeO7lSZBPUQhtb5O0xJe8LtYhj/GxldoL09bBj9+ZmE2hNoHqQSFMN5fikh4Q==", + "dependencies": { + "process": "^0.11.1", + "util": "^0.10.3" + } + }, + "node_modules/path-browserify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-1.0.1.tgz", + "integrity": "sha512-b7uo2UCUOYZcnF/3ID0lulOJi/bafxa1xPe7ZPsammBSpjSWQkjNxlt635YGS2MiR9GjvuXCtz2emr3jbsz98g==" + }, "node_modules/path-exists": { "version": "4.0.0", "resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz", @@ -9355,6 +10207,15 @@ "node": ">=8" } }, + "node_modules/path-loader": { + "version": "1.0.12", + "resolved": "https://registry.npmjs.org/path-loader/-/path-loader-1.0.12.tgz", + "integrity": "sha512-n7oDG8B+k/p818uweWrOixY9/Dsr89o2TkCm6tOTex3fpdo2+BFDgR+KpB37mGKBRsBAlR8CIJMFN0OEy/7hIQ==", + "dependencies": { + "native-promise-only": "^0.8.1", + "superagent": "^7.1.6" + } + }, "node_modules/path-parse": { "version": "1.0.7", "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", @@ -10162,6 +11023,232 @@ "postcss": "^8.2.15" } }, + "node_modules/postman-code-generators": { + "version": "1.9.0", + 
"resolved": "https://registry.npmjs.org/postman-code-generators/-/postman-code-generators-1.9.0.tgz", + "integrity": "sha512-ZM4H7cU1dNUuMPw9CsEoQ7aONl/n8bpSEunZcvzyJd1WtLNj5ktGBGOlDtbTo773dZy5CiVrugdCdt0jhdnUOA==", + "hasInstallScript": true, + "dependencies": { + "async": "3.2.2", + "lodash": "4.17.21", + "path": "0.12.7", + "postman-collection": "4.0.0", + "shelljs": "0.8.5" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/postman-code-generators/node_modules/async": { + "version": "3.2.2", + "resolved": "https://registry.npmjs.org/async/-/async-3.2.2.tgz", + "integrity": "sha512-H0E+qZaDEfx/FY4t7iLRv1W2fFI6+pyCeTw1uN20AQPiwqwM6ojPxHxdLv4z8hi2DtnW9BOckSspLucW7pIE5g==" + }, + "node_modules/postman-code-generators/node_modules/faker": { + "version": "5.5.3", + "resolved": "https://registry.npmjs.org/faker/-/faker-5.5.3.tgz", + "integrity": "sha512-wLTv2a28wjUyWkbnX7u/ABZBkUkIF2fCd73V6P2oFqEGEktDfzWx4UxrSqtPRw0xPRAcjeAOIiJWqZm3pP4u3g==" + }, + "node_modules/postman-code-generators/node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/postman-code-generators/node_modules/lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dependencies": { + "yallist": "^4.0.0" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-code-generators/node_modules/mime-db": { + "version": "1.48.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.48.0.tgz", + "integrity": 
"sha512-FM3QwxV+TnZYQ2aRqhlKBMHxk10lTbMt3bBkMAp54ddrNeVSfcQYOOKuGuy3Ddrm38I04If834fOUSq1yzslJQ==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/postman-code-generators/node_modules/mime-types": { + "version": "2.1.31", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.31.tgz", + "integrity": "sha512-XGZnNzm3QvgKxa8dpzyhFTHmpP3l5YNusmne07VUOXxou9CqUqYa/HBy124RqtVh/O2pECas/MOcsDgpilPOPg==", + "dependencies": { + "mime-db": "1.48.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/postman-code-generators/node_modules/postman-collection": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/postman-collection/-/postman-collection-4.0.0.tgz", + "integrity": "sha512-vDrXG/dclSu6RMqPqBz4ZqoQBwcj/a80sJYsQZmzWJ6dWgXiudPhwu6Vm3C1Hy7zX5W8A6am1Z6vb/TB4eyURA==", + "dependencies": { + "faker": "5.5.3", + "file-type": "3.9.0", + "http-reasons": "0.1.0", + "iconv-lite": "0.6.3", + "liquid-json": "0.3.1", + "lodash": "4.17.21", + "mime-format": "2.0.1", + "mime-types": "2.1.31", + "postman-url-encoder": "3.0.1", + "semver": "7.3.5", + "uuid": "8.3.2" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-code-generators/node_modules/postman-url-encoder": { + "version": "3.0.1", + "resolved": "https://registry.npmjs.org/postman-url-encoder/-/postman-url-encoder-3.0.1.tgz", + "integrity": "sha512-dMPqXnkDlstM2Eya+Gw4MIGWEan8TzldDcUKZIhZUsJ/G5JjubfQPhFhVWKzuATDMvwvrWbSjF+8VmAvbu6giw==", + "dependencies": { + "punycode": "^2.1.1" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-code-generators/node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "engines": { + "node": ">=6" + } + }, + "node_modules/postman-code-generators/node_modules/semver": { + "version": "7.3.5", + 
"resolved": "https://registry.npmjs.org/semver/-/semver-7.3.5.tgz", + "integrity": "sha512-PoeGJYh8HK4BTO/a9Tf6ZG3veo/A7ZVsYrSA6J8ny9nb3B1VrpkuN+z9OE5wfE5p6H4LchYZsegiQgbJD94ZFQ==", + "dependencies": { + "lru-cache": "^6.0.0" + }, + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-code-generators/node_modules/yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==" + }, + "node_modules/postman-collection": { + "version": "4.4.0", + "resolved": "https://registry.npmjs.org/postman-collection/-/postman-collection-4.4.0.tgz", + "integrity": "sha512-2BGDFcUwlK08CqZFUlIC8kwRJueVzPjZnnokWPtJCd9f2J06HBQpGL7t2P1Ud1NEsK9NHq9wdipUhWLOPj5s/Q==", + "dependencies": { + "@faker-js/faker": "5.5.3", + "file-type": "3.9.0", + "http-reasons": "0.1.0", + "iconv-lite": "0.6.3", + "liquid-json": "0.3.1", + "lodash": "4.17.21", + "mime-format": "2.0.1", + "mime-types": "2.1.35", + "postman-url-encoder": "3.0.5", + "semver": "7.5.4", + "uuid": "8.3.2" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-collection/node_modules/iconv-lite": { + "version": "0.6.3", + "resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.6.3.tgz", + "integrity": "sha512-4fCk79wshMdzMp2rH06qWrJE4iolqLhCUH+OiuIgU++RB0+94NlDL81atO7GX55uUKueo0txHNtvEyI6D7WdMw==", + "dependencies": { + "safer-buffer": ">= 2.1.2 < 3.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/postman-collection/node_modules/lru-cache": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/lru-cache/-/lru-cache-6.0.0.tgz", + "integrity": "sha512-Jo6dJ04CmSjuznwJSS3pUeWmd/H0ffTlkXXgwZi+eq1UCmqQwCh+eLsYOYCwY991i2Fah4h1BEMCx4qThGbsiA==", + "dependencies": { + "yallist": "^4.0.0" + }, + "engines": { + "node": ">=10" + } + }, 
+ "node_modules/postman-collection/node_modules/mime-db": { + "version": "1.52.0", + "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", + "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/postman-collection/node_modules/mime-types": { + "version": "2.1.35", + "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", + "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", + "dependencies": { + "mime-db": "1.52.0" + }, + "engines": { + "node": ">= 0.6" + } + }, + "node_modules/postman-collection/node_modules/semver": { + "version": "7.5.4", + "resolved": "https://registry.npmjs.org/semver/-/semver-7.5.4.tgz", + "integrity": "sha512-1bCSESV6Pv+i21Hvpxp3Dx+pSD8lIPt8uVjRrxAUt/nbswYc+tK6Y2btiULjd4+fnq15PX+nqQDC7Oft7WkwcA==", + "dependencies": { + "lru-cache": "^6.0.0" + }, + "bin": { + "semver": "bin/semver.js" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-collection/node_modules/yallist": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/yallist/-/yallist-4.0.0.tgz", + "integrity": "sha512-3wdGidZyq5PB084XLES5TpOSRA3wjXAlIWMhum2kRcv/41Sn2emQ0dycQW4uZXLejwKvg6EsvbdlVL+FYEct7A==" + }, + "node_modules/postman-url-encoder": { + "version": "3.0.5", + "resolved": "https://registry.npmjs.org/postman-url-encoder/-/postman-url-encoder-3.0.5.tgz", + "integrity": "sha512-jOrdVvzUXBC7C+9gkIkpDJ3HIxOHTIqjpQ4C1EMt1ZGeMvSEpbFCKq23DEfgsj46vMnDgyQf+1ZLp2Wm+bKSsA==", + "dependencies": { + "punycode": "^2.1.1" + }, + "engines": { + "node": ">=10" + } + }, + "node_modules/postman-url-encoder/node_modules/punycode": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz", + "integrity": 
"sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "engines": { + "node": ">=6" + } + }, "node_modules/prepend-http": { "version": "2.0.0", "resolved": "https://registry.npmjs.org/prepend-http/-/prepend-http-2.0.0.tgz", @@ -10203,6 +11290,14 @@ "node": ">=6" } }, + "node_modules/process": { + "version": "0.11.10", + "resolved": "https://registry.npmjs.org/process/-/process-0.11.10.tgz", + "integrity": "sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A==", + "engines": { + "node": ">= 0.6.0" + } + }, "node_modules/process-nextick-args": { "version": "2.0.1", "resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz", @@ -10623,6 +11718,40 @@ "webpack": ">=4.41.1 || 5.x" } }, + "node_modules/react-magic-dropzone": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/react-magic-dropzone/-/react-magic-dropzone-1.0.1.tgz", + "integrity": "sha512-0BIROPARmXHpk4AS3eWBOsewxoM5ndk2psYP/JmbCq8tz3uR2LIV1XiroZ9PKrmDRMctpW+TvsBCtWasuS8vFA==" + }, + "node_modules/react-redux": { + "version": "7.2.9", + "resolved": "https://registry.npmjs.org/react-redux/-/react-redux-7.2.9.tgz", + "integrity": "sha512-Gx4L3uM182jEEayZfRbI/G11ZpYdNAnBs70lFVMNdHJI76XYtR+7m0MN+eAs7UHBPhWXcnFPaS+9owSCJQHNpQ==", + "dependencies": { + "@babel/runtime": "^7.15.4", + "@types/react-redux": "^7.1.20", + "hoist-non-react-statics": "^3.3.2", + "loose-envify": "^1.4.0", + "prop-types": "^15.7.2", + "react-is": "^17.0.2" + }, + "peerDependencies": { + "react": "^16.8.3 || ^17 || ^18" + }, + "peerDependenciesMeta": { + "react-dom": { + "optional": true + }, + "react-native": { + "optional": true + } + } + }, + "node_modules/react-redux/node_modules/react-is": { + "version": "17.0.2", + "resolved": "https://registry.npmjs.org/react-is/-/react-is-17.0.2.tgz", + "integrity": 
"sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w==" + }, "node_modules/react-router": { "version": "5.3.4", "resolved": "https://registry.npmjs.org/react-router/-/react-router-5.3.4.tgz", @@ -10738,6 +11867,39 @@ "node": ">=6.0.0" } }, + "node_modules/redux": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/redux/-/redux-4.2.1.tgz", + "integrity": "sha512-LAUYz4lc+Do8/g7aeRa8JkyDErK6ekstQaqWQrNRW//MY1TvCEpMtpTWvlQ+FPbWCx+Xixu/6SHt5N0HR+SB4w==", + "dependencies": { + "@babel/runtime": "^7.9.2" + } + }, + "node_modules/redux-devtools-extension": { + "version": "2.13.9", + "resolved": "https://registry.npmjs.org/redux-devtools-extension/-/redux-devtools-extension-2.13.9.tgz", + "integrity": "sha512-cNJ8Q/EtjhQaZ71c8I9+BPySIBVEKssbPpskBfsXqb8HJ002A3KRVHfeRzwRo6mGPqsm7XuHTqNSNeS1Khig0A==", + "deprecated": "Package moved to @redux-devtools/extension.", + "peerDependencies": { + "redux": "^3.1.0 || ^4.0.0" + } + }, + "node_modules/redux-thunk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/redux-thunk/-/redux-thunk-2.4.2.tgz", + "integrity": "sha512-+P3TjtnP0k/FEjcBL5FZpoovtvrTNT/UXd4/sluaSyrURlSlhLSzEdfsTBW7WsKB6yPvgd7q/iZPICFjW4o57Q==", + "peerDependencies": { + "redux": "^4" + } + }, + "node_modules/reftools": { + "version": "1.1.9", + "resolved": "https://registry.npmjs.org/reftools/-/reftools-1.1.9.tgz", + "integrity": "sha512-OVede/NQE13xBQ+ob5CKd5KyeJYU2YInb1bmV4nRoOfquZPkAkxuOXicSe1PvqIuZZ4kD13sPKBbR7UFDmli6w==", + "funding": { + "url": "https://github.com/Mermade/oas-kit?sponsor=1" + } + }, "node_modules/regenerate": { "version": "1.4.2", "resolved": "https://registry.npmjs.org/regenerate/-/regenerate-1.4.2.tgz", @@ -10816,13 +11978,63 @@ "regjsparser": "bin/parser" } }, - "node_modules/regjsparser/node_modules/jsesc": { - "version": "0.5.0", - "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz", - 
"integrity": "sha512-uZz5UnB7u4T9LvwmFqXii7pZSouaRPorGs5who1Ip7VO0wxanFvBL7GkM6dTHlgX+jhBApRetaWpnDabOeTcnA==", - "bin": { - "jsesc": "bin/jsesc" - } + "node_modules/regjsparser/node_modules/jsesc": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz", + "integrity": "sha512-uZz5UnB7u4T9LvwmFqXii7pZSouaRPorGs5who1Ip7VO0wxanFvBL7GkM6dTHlgX+jhBApRetaWpnDabOeTcnA==", + "bin": { + "jsesc": "bin/jsesc" + } + }, + "node_modules/rehype-parse": { + "version": "6.0.2", + "resolved": "https://registry.npmjs.org/rehype-parse/-/rehype-parse-6.0.2.tgz", + "integrity": "sha512-0S3CpvpTAgGmnz8kiCyFLGuW5yA4OQhyNTm/nwPopZ7+PI11WnGl1TTWTGv/2hPEe/g2jRLlhVVSsoDH8waRug==", + "dependencies": { + "hast-util-from-parse5": "^5.0.0", + "parse5": "^5.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/rehype-parse/node_modules/hast-util-from-parse5": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-5.0.3.tgz", + "integrity": "sha512-gOc8UB99F6eWVWFtM9jUikjN7QkWxB3nY0df5Z0Zq1/Nkwl5V4hAAsl0tmwlgWl/1shlTF8DnNYLO8X6wRV9pA==", + "dependencies": { + "ccount": "^1.0.3", + "hastscript": "^5.0.0", + "property-information": "^5.0.0", + "web-namespaces": "^1.1.2", + "xtend": "^4.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/rehype-parse/node_modules/hastscript": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-5.1.2.tgz", + "integrity": "sha512-WlztFuK+Lrvi3EggsqOkQ52rKbxkXL3RwB6t5lwoa8QLMemoWfBuL43eDrwOamJyR7uKQKdmKYaBH1NZBiIRrQ==", + "dependencies": { + "comma-separated-tokens": "^1.0.0", + "hast-util-parse-selector": "^2.0.0", + "property-information": "^5.0.0", + "space-separated-tokens": "^1.0.0" + }, + "funding": { + "type": 
"opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/rehype-parse/node_modules/parse5": { + "version": "5.1.1", + "resolved": "https://registry.npmjs.org/parse5/-/parse5-5.1.1.tgz", + "integrity": "sha512-ugq4DFI0Ptb+WWjAdOK16+u/nHfiIrcE+sh8kZMaM0WllQKLI9rOUq6c2b7cwPkXdzfQESqvoqK6ug7U/Yyzug==" }, "node_modules/relateurl": { "version": "0.2.7", @@ -10832,6 +12044,32 @@ "node": ">= 0.10" } }, + "node_modules/remark-admonitions": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/remark-admonitions/-/remark-admonitions-1.2.1.tgz", + "integrity": "sha512-Ji6p68VDvD+H1oS95Fdx9Ar5WA2wcDA4kwrrhVU7fGctC6+d3uiMICu7w7/2Xld+lnU7/gi+432+rRbup5S8ow==", + "dependencies": { + "rehype-parse": "^6.0.2", + "unified": "^8.4.2", + "unist-util-visit": "^2.0.1" + } + }, + "node_modules/remark-admonitions/node_modules/unified": { + "version": "8.4.2", + "resolved": "https://registry.npmjs.org/unified/-/unified-8.4.2.tgz", + "integrity": "sha512-JCrmN13jI4+h9UAyKEoGcDZV+i1E7BLFuG7OsaDvTXI5P0qhHX+vZO/kOhz9jn8HGENDKbwSeB0nVOg4gVStGA==", + "dependencies": { + "bail": "^1.0.0", + "extend": "^3.0.0", + "is-plain-obj": "^2.0.0", + "trough": "^1.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, "node_modules/remark-emoji": { "version": "2.2.0", "resolved": "https://registry.npmjs.org/remark-emoji/-/remark-emoji-2.2.0.tgz", @@ -11102,6 +12340,14 @@ "node": ">=0.10" } }, + "node_modules/require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/require-from-string": { "version": "2.0.2", "resolved": 
"https://registry.npmjs.org/require-from-string/-/require-from-string-2.0.2.tgz", @@ -11118,11 +12364,21 @@ "node": "*" } }, + "node_modules/require-main-filename": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/require-main-filename/-/require-main-filename-2.0.0.tgz", + "integrity": "sha512-NKN5kMDylKuldxYLSUfrbo5Tuzh4hd+2E8NPPX02mZtn1VuREQToYe/ZdlJy+J3uCpfaiGF05e7B8W0iXbQHmg==" + }, "node_modules/requires-port": { "version": "1.0.0", "resolved": "https://registry.npmjs.org/requires-port/-/requires-port-1.0.0.tgz", "integrity": "sha512-KigOCHcocU3XODJxsu8i/j8T9tzT4adHiecwORRQ0ZZFcp7ahwXuRU1m+yuO90C5ZUyGeGfocHDI14M3L3yDAQ==" }, + "node_modules/reselect": { + "version": "4.1.8", + "resolved": "https://registry.npmjs.org/reselect/-/reselect-4.1.8.tgz", + "integrity": "sha512-ab9EmR80F/zQTMNeneUr4cv+jSwPJgIlvEmVwLerwrWVbpLlBuls9XHzIeTFy4cegU2NHBp3va0LKOzU5qFEYQ==" + }, "node_modules/resolve": { "version": "1.22.1", "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.1.tgz", @@ -11340,6 +12596,176 @@ "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" }, + "node_modules/sanitize-html": { + "version": "1.20.1", + "resolved": "https://registry.npmjs.org/sanitize-html/-/sanitize-html-1.20.1.tgz", + "integrity": "sha512-txnH8TQjaQvg2Q0HY06G6CDJLVYCpbnxrdO0WN8gjCKaU5J0KbyGYhZxx5QJg3WLZ1lB7XU9kDkfrCXUozqptA==", + "dependencies": { + "chalk": "^2.4.1", + "htmlparser2": "^3.10.0", + "lodash.clonedeep": "^4.5.0", + "lodash.escaperegexp": "^4.1.2", + "lodash.isplainobject": "^4.0.6", + "lodash.isstring": "^4.0.1", + "lodash.mergewith": "^4.6.1", + "postcss": "^7.0.5", + "srcset": "^1.0.0", + "xtend": "^4.0.1" + } + }, + "node_modules/sanitize-html/node_modules/ansi-styles": { + "version": "3.2.1", + "resolved": 
"https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dependencies": { + "color-convert": "^1.9.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/sanitize-html/node_modules/chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dependencies": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/sanitize-html/node_modules/color-convert": { + "version": "1.9.3", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dependencies": { + "color-name": "1.1.3" + } + }, + "node_modules/sanitize-html/node_modules/color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==" + }, + "node_modules/sanitize-html/node_modules/dom-serializer": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/dom-serializer/-/dom-serializer-0.2.2.tgz", + "integrity": "sha512-2/xPb3ORsQ42nHYiSunXkDjPLBaEj/xTwUO4B7XCZQTRk7EBtTOPaygh10YAAh2OI1Qrp6NWfpAhzswj0ydt9g==", + "dependencies": { + "domelementtype": "^2.0.1", + "entities": "^2.0.0" + } + }, + "node_modules/sanitize-html/node_modules/dom-serializer/node_modules/domelementtype": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-2.3.0.tgz", + "integrity": 
"sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/fb55" + } + ] + }, + "node_modules/sanitize-html/node_modules/dom-serializer/node_modules/entities": { + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-2.2.0.tgz", + "integrity": "sha512-p92if5Nz619I0w+akJrLZH0MX0Pb5DX39XOwQTtXSdQQOaYH03S1uIQp4mhOZtAXrxq4ViO67YTiLBo2638o9A==", + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/sanitize-html/node_modules/domelementtype": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/domelementtype/-/domelementtype-1.3.1.tgz", + "integrity": "sha512-BSKB+TSpMpFI/HOxCNr1O8aMOTZ8hT3pM3GQ0w/mWRmkhEDSFJkkyzz4XQsBV44BChwGkrDfMyjVD0eA2aFV3w==" + }, + "node_modules/sanitize-html/node_modules/domhandler": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/domhandler/-/domhandler-2.4.2.tgz", + "integrity": "sha512-JiK04h0Ht5u/80fdLMCEmV4zkNh2BcoMFBmZ/91WtYZ8qVXSKjiw7fXMgFPnHcSZgOo3XdinHvmnDUeMf5R4wA==", + "dependencies": { + "domelementtype": "1" + } + }, + "node_modules/sanitize-html/node_modules/domutils": { + "version": "1.7.0", + "resolved": "https://registry.npmjs.org/domutils/-/domutils-1.7.0.tgz", + "integrity": "sha512-Lgd2XcJ/NjEw+7tFvfKxOzCYKZsdct5lczQ2ZaQY8Djz7pfAD3Gbp8ySJWtreII/vDlMVmxwa6pHmdxIYgttDg==", + "dependencies": { + "dom-serializer": "0", + "domelementtype": "1" + } + }, + "node_modules/sanitize-html/node_modules/entities": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/entities/-/entities-1.1.2.tgz", + "integrity": "sha512-f2LZMYl1Fzu7YSBKg+RoROelpOaNrcGmE9AZubeDfrCEia483oW4MI4VyFd5VNHIgQ/7qm1I0wUHK1eJnn2y2w==" + }, + "node_modules/sanitize-html/node_modules/escape-string-regexp": { + "version": "1.0.5", + "resolved": 
"https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/sanitize-html/node_modules/has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "engines": { + "node": ">=4" + } + }, + "node_modules/sanitize-html/node_modules/htmlparser2": { + "version": "3.10.1", + "resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-3.10.1.tgz", + "integrity": "sha512-IgieNijUMbkDovyoKObU1DUhm1iwNYE/fuifEoEHfd1oZKZDaONBSkal7Y01shxsM49R4XaMdGez3WnF9UfiCQ==", + "dependencies": { + "domelementtype": "^1.3.1", + "domhandler": "^2.3.0", + "domutils": "^1.5.1", + "entities": "^1.1.1", + "inherits": "^2.0.1", + "readable-stream": "^3.1.1" + } + }, + "node_modules/sanitize-html/node_modules/picocolors": { + "version": "0.2.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-0.2.1.tgz", + "integrity": "sha512-cMlDqaLEqfSaW8Z7N5Jw+lyIW869EzT73/F5lhtY9cLGoVxSXznfgfXMO0Z5K0o0Q2TkTXq+0KFsdnSe3jDViA==" + }, + "node_modules/sanitize-html/node_modules/postcss": { + "version": "7.0.39", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-7.0.39.tgz", + "integrity": "sha512-yioayjNbHn6z1/Bywyb2Y4s3yvDAeXGOyxqD+LnVOinq6Mdmd++SW2wUNVzavyyHxd6+DxzWGIuosg6P1Rj8uA==", + "dependencies": { + "picocolors": "^0.2.1", + "source-map": "^0.6.1" + }, + "engines": { + "node": ">=6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + } + }, + "node_modules/sanitize-html/node_modules/supports-color": { + "version": "5.5.0", + "resolved": 
"https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dependencies": { + "has-flag": "^3.0.0" + }, + "engines": { + "node": ">=4" + } + }, "node_modules/sass": { "version": "1.60.0", "resolved": "https://registry.npmjs.org/sass/-/sass-1.60.0.tgz", @@ -11678,6 +13104,27 @@ "node": ">= 0.8.0" } }, + "node_modules/set-blocking": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/set-blocking/-/set-blocking-2.0.0.tgz", + "integrity": "sha512-KiKBS8AnWGEyLzofFfmvKwpdPzqiy16LvQfK3yv/fVH7Bj13/wl3JSR1J+rfgRE9q7xUJK4qvgS8raSOeLUehw==" + }, + "node_modules/set-function-length": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/set-function-length/-/set-function-length-1.2.1.tgz", + "integrity": "sha512-j4t6ccc+VsKwYHso+kElc5neZpjtq9EnRICFZtWyBsLojhmeF/ZBd/elqm22WJh/BziDe/SBiOeAt0m2mfLD0g==", + "dependencies": { + "define-data-property": "^1.1.2", + "es-errors": "^1.3.0", + "function-bind": "^1.1.2", + "get-intrinsic": "^1.2.3", + "gopd": "^1.0.1", + "has-property-descriptors": "^1.0.1" + }, + "engines": { + "node": ">= 0.4" + } + }, "node_modules/setimmediate": { "version": "1.0.5", "resolved": "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz", @@ -11767,13 +13214,17 @@ } }, "node_modules/side-channel": { - "version": "1.0.4", - "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.4.tgz", - "integrity": "sha512-q5XPytqFEIKHkGdiMIrY10mvLRvnQh42/+GoBlFW3b2LXLE2xxJpZFdm94we0BaoV3RwJyGqg5wS7epxTv0Zvw==", + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/side-channel/-/side-channel-1.0.6.tgz", + "integrity": "sha512-fDW/EZ6Q9RiO8eFG8Hj+7u/oW+XrPTIChwCOM2+th2A6OblDtYYIpve9m+KvI9Z4C9qSEXlaGR6bTEYHReuglA==", "dependencies": { - "call-bind": "^1.0.0", - "get-intrinsic": "^1.0.2", - "object-inspect": "^1.9.0" + 
"call-bind": "^1.0.7", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.4", + "object-inspect": "^1.13.1" + }, + "engines": { + "node": ">= 0.4" }, "funding": { "url": "https://github.com/sponsors/ljharb" @@ -11933,12 +13384,29 @@ "resolved": "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.0.3.tgz", "integrity": "sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==" }, + "node_modules/srcset": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/srcset/-/srcset-1.0.0.tgz", + "integrity": "sha512-UH8e80l36aWnhACzjdtLspd4TAWldXJMa45NuOkTTU+stwekswObdqM63TtQixN4PPd/vO/kxLa6RD+tUPeFMg==", + "dependencies": { + "array-uniq": "^1.0.2", + "number-is-nan": "^1.0.0" + }, + "engines": { + "node": ">=0.10.0" + } + }, "node_modules/stable": { "version": "0.1.8", "resolved": "https://registry.npmjs.org/stable/-/stable-0.1.8.tgz", "integrity": "sha512-ji9qxRnOVfcuLDySj9qzhGSEFVobyt1kIOSkj1qZzYLzq7Tos/oUUWvotUPQLlrsidqsK6tBH89Bc9kL5zHA6w==", "deprecated": "Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. 
See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility" }, + "node_modules/state-local": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/state-local/-/state-local-1.0.7.tgz", + "integrity": "sha512-HTEHMNieakEnoe33shBYcZ7NX83ACUjCu8c40iOGEZsngj9zRnkqS9j1pqQPXwobB0ZcVTk27REb7COQ0UR59w==" + }, "node_modules/state-toggle": { "version": "1.0.3", "resolved": "https://registry.npmjs.org/state-toggle/-/state-toggle-1.0.3.tgz", @@ -12084,6 +13552,39 @@ "postcss": "^8.2.15" } }, + "node_modules/superagent": { + "version": "7.1.6", + "resolved": "https://registry.npmjs.org/superagent/-/superagent-7.1.6.tgz", + "integrity": "sha512-gZkVCQR1gy/oUXr+kxJMLDjla434KmSOKbx5iGD30Ql+AkJQ/YlPKECJy2nhqOsHLjGHzoDTXNSjhnvWhzKk7g==", + "deprecated": "Please downgrade to v7.1.5 if you need IE/ActiveXObject support OR upgrade to v8.0.0 as we no longer support IE and published an incorrect patch version (see https://github.com/visionmedia/superagent/issues/1731)", + "dependencies": { + "component-emitter": "^1.3.0", + "cookiejar": "^2.1.3", + "debug": "^4.3.4", + "fast-safe-stringify": "^2.1.1", + "form-data": "^4.0.0", + "formidable": "^2.0.1", + "methods": "^1.1.2", + "mime": "2.6.0", + "qs": "^6.10.3", + "readable-stream": "^3.6.0", + "semver": "^7.3.7" + }, + "engines": { + "node": ">=6.4.0 <13 || >=14" + } + }, + "node_modules/superagent/node_modules/mime": { + "version": "2.6.0", + "resolved": "https://registry.npmjs.org/mime/-/mime-2.6.0.tgz", + "integrity": "sha512-USPkMeET31rOMiarsBNIHZKLGgvKc/LrjofAnBlOttf5ajRvqiRA8QsenbcooctK6d6Ts6aqZXBA+XbkKthiQg==", + "bin": { + "mime": "cli.js" + }, + "engines": { + "node": ">=4.0.0" + } + }, "node_modules/supports-color": { "version": "7.2.0", "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz", @@ -12786,6 +14287,15 @@ "node": ">=6" } }, + 
"node_modules/url": { + "version": "0.11.3", + "resolved": "https://registry.npmjs.org/url/-/url-0.11.3.tgz", + "integrity": "sha512-6hxOLGfZASQK/cijlZnZJTq8OXAkt/3YGfQX45vvMYXpZoo8NdWZcY73K108Jf759lS1Bv/8wXnHDTSz17dSRw==", + "dependencies": { + "punycode": "^1.4.1", + "qs": "^6.11.2" + } + }, "node_modules/url-loader": { "version": "4.1.1", "resolved": "https://registry.npmjs.org/url-loader/-/url-loader-4.1.1.tgz", @@ -12859,6 +14369,20 @@ "node": ">=4" } }, + "node_modules/url/node_modules/qs": { + "version": "6.12.0", + "resolved": "https://registry.npmjs.org/qs/-/qs-6.12.0.tgz", + "integrity": "sha512-trVZiI6RMOkO476zLGaBIzszOdFPnCCXHPG9kn0yuS1uz6xdVxPfZdB3vUig9pxPFDM9BRAgz/YUIVQ1/vuiUg==", + "dependencies": { + "side-channel": "^1.0.6" + }, + "engines": { + "node": ">=0.6" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, "node_modules/use-composed-ref": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/use-composed-ref/-/use-composed-ref-1.3.0.tgz", @@ -12904,11 +14428,24 @@ "react": "^16.8.0 || ^17.0.0 || ^18.0.0" } }, + "node_modules/util": { + "version": "0.10.4", + "resolved": "https://registry.npmjs.org/util/-/util-0.10.4.tgz", + "integrity": "sha512-0Pm9hTQ3se5ll1XihRic3FDIku70C+iHUdT/W926rSgHV5QgXsYbKZN8MSC3tJtSkhuROzvsQjAaFENRXr+19A==", + "dependencies": { + "inherits": "2.0.3" + } + }, "node_modules/util-deprecate": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==" }, + "node_modules/util/node_modules/inherits": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", + "integrity": "sha512-x00IRNXNy63jwGkJmzPigoySHbaqpNuzKbBOmzK+g2OdZpQ9w+sxCN+VSB3ja7IAge2OP2qpfxTjeNcyjmW1uw==" + }, "node_modules/utila": { "version": "0.4.0", "resolved": 
"https://registry.npmjs.org/utila/-/utila-0.4.0.tgz", @@ -13459,6 +14996,11 @@ "node": ">= 8" } }, + "node_modules/which-module": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/which-module/-/which-module-2.0.1.tgz", + "integrity": "sha512-iBdZ57RDvnOR9AGBhML2vFZf7h8vmBjhoaZqODJBFWHVtKkDmKuHai3cx5PgVMrX5YDNp27AofYbAwctSS+vhQ==" + }, "node_modules/widest-line": { "version": "4.0.1", "resolved": "https://registry.npmjs.org/widest-line/-/widest-line-4.0.1.tgz", @@ -13593,6 +15135,11 @@ "node": ">=0.4" } }, + "node_modules/y18n": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-4.0.3.tgz", + "integrity": "sha512-JKhqTOwSrqNA1NY5lSztJ1GrBiUodLMmIZuLiDaMRJ+itFd+ABVE8XBjOvIWL+rSqNDC74LCSFmlb/U4UZ4hJQ==" + }, "node_modules/yallist": { "version": "3.1.1", "resolved": "https://registry.npmjs.org/yallist/-/yallist-3.1.1.tgz", @@ -13606,6 +15153,88 @@ "node": ">= 6" } }, + "node_modules/yargs": { + "version": "15.4.1", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-15.4.1.tgz", + "integrity": "sha512-aePbxDmcYW++PaqBsJ+HYUFwCdv4LVvdnhBy78E57PIor8/OVvhMrADFFEDh8DHDFRv/O9i3lPhsENjO7QX0+A==", + "dependencies": { + "cliui": "^6.0.0", + "decamelize": "^1.2.0", + "find-up": "^4.1.0", + "get-caller-file": "^2.0.1", + "require-directory": "^2.1.1", + "require-main-filename": "^2.0.0", + "set-blocking": "^2.0.0", + "string-width": "^4.2.0", + "which-module": "^2.0.0", + "y18n": "^4.0.0", + "yargs-parser": "^18.1.2" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs-parser": { + "version": "18.1.3", + "resolved": "https://registry.npmjs.org/yargs-parser/-/yargs-parser-18.1.3.tgz", + "integrity": "sha512-o50j0JeToy/4K6OZcaQmW6lyXXKhq7csREXcDwk2omFPJEwUNOVtJKvmDr9EI1fAJZUyZcRF7kxGBWmRXudrCQ==", + "dependencies": { + "camelcase": "^5.0.0", + "decamelize": "^1.2.0" + }, + "engines": { + "node": ">=6" + } + }, + 
"node_modules/yargs-parser/node_modules/camelcase": { + "version": "5.3.1", + "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz", + "integrity": "sha512-L28STB170nwWS63UjtlEOE3dldQApaJXZkOI1uMFfzf3rRuPegHaHesyee+YxQ+W6SvRDQV6UrdOdRiR153wJg==", + "engines": { + "node": ">=6" + } + }, + "node_modules/yargs/node_modules/cliui": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-6.0.0.tgz", + "integrity": "sha512-t6wbgtoCXvAzst7QgXxJYqPt0usEfbgQdftEPbLL/cvv6HPE5VgvqCuAIDR0NgU52ds6rFwqrgakNLrHEjCbrQ==", + "dependencies": { + "string-width": "^4.2.0", + "strip-ansi": "^6.0.0", + "wrap-ansi": "^6.2.0" + } + }, + "node_modules/yargs/node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==" + }, + "node_modules/yargs/node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/yargs/node_modules/wrap-ansi": { + "version": "6.2.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-6.2.0.tgz", + "integrity": "sha512-r6lPcBGxZXlIcymEu7InxDMhdW0KDxpLgoFLcguasxCaJ/SOIZwINatK9KY/tf+ZrlywOKU0UDj3ATXUBfxJXA==", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=8" + } + }, "node_modules/yocto-queue": { "version": "0.1.0", "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", diff --git a/package.json b/package.json index 
12d2a27..57010fa 100644 --- a/package.json +++ b/package.json @@ -22,11 +22,13 @@ "@svgr/webpack": "^7.0.0", "clsx": "^1.2.1", "docusaurus-plugin-sass": "^0.2.3", + "docusaurus-preset-openapi": "^0.6.4", "prism-react-renderer": "^1.3.5", "react": "^17.0.2", "react-dom": "^17.0.2", "react-icons": "^4.8.0", - "sass": "^1.60.0" + "sass": "^1.60.0", + "url": "^0.11.3" }, "browserslist": { "production": [ @@ -48,4 +50,4 @@ "tmp": "^0.2.1", "typescript": "^5.0.2" } -} \ No newline at end of file +} diff --git a/sidebars.js b/sidebars.js index 1af50d9..6384372 100644 --- a/sidebars.js +++ b/sidebars.js @@ -14,7 +14,29 @@ /** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */ const sidebars = { // By default, Docusaurus generates a sidebar from the docs folder structure - mainSidebar: [{ type: "autogenerated", dirName: "." }], + defraSidebar: [{ + type: "autogenerated", + dirName: "defradb", + }], + sourcehubSidebar: [ + { + type: "autogenerated", + dirName: "sourcehub" + }, + { + type: 'link', + label: 'API', // The link label + href: '/sourcehub/api', // The internal path + }, + ], + orbisSidebar: [{ + type: "autogenerated", + dirName: "orbis" + }], + lensvmSidebar: [{ + type: "autogenerated", + dirName: "lensvm" + }] // But you can create a sidebar manually /* diff --git a/src/code-theme/code-theme-light.js b/src/code-theme/code-theme-light.js deleted file mode 100644 index 450c942..0000000 --- a/src/code-theme/code-theme-light.js +++ /dev/null @@ -1,90 +0,0 @@ -var theme = { - plain: { - color: "#a9b1d6", - backgroundColor: "#1a1b26", - }, - styles: [ - { - types: ["comment"], - style: { - fontStyle: "italic", - }, - }, - { - types: ["keyword", "operator"], - style: { - color: "rgb(137, 221, 255)", - }, - }, - { - types: ["punctuation"], - style: { - color: "rgb(68, 75, 106)", - }, - }, - { - types: ["builtin", "number"], - style: { - color: "rgb(255, 158, 100)", - }, - }, - { - types: ["string", "symbol", "constant", "attr-name"], - style: { - color: 
"rgb(158, 206, 106)", - }, - }, - { - types: ["function"], - style: { - color: "rgb(13, 185, 215)", - }, - }, - { - types: ["tag"], - style: { - color: "rgb(247, 118, 142)", - }, - }, - { - types: ["variable"], - style: { - color: "rgb(224, 175, 104)", - }, - }, - { - types: ["char"], - style: { - color: "rgb(187, 154, 247)", - }, - }, - { - types: ["property"], - style: { - color: "rgb(154, 189, 245)", - }, - }, - { - types: ["inserted"], - style: { - color: "rgb(68, 157, 171)", - }, - }, - { - types: ["deleted"], - style: { - color: "rgb(145, 76, 84)", - }, - }, - { - types: ["changed"], - style: { - color: "rgb(97, 131, 187)", - }, - }, - ], -}; - -module.exports = theme; - -module.exports = theme; diff --git a/src/code-theme/code-theme.js b/src/code-theme/code-theme.js new file mode 100644 index 0000000..48a93ee --- /dev/null +++ b/src/code-theme/code-theme.js @@ -0,0 +1,108 @@ +const theme = { + plain: { + color: "var(--code-foreground)", + backgroundColor: "var(--code-background)", // assuming background is handled by container + }, + styles: [ + { + types: ["comment"], + style: { + color: "var(--code-token-comment)", + fontStyle: "italic", + }, + }, + { + types: ["keyword", "builtin", "changed"], + style: { + color: "var(--code-token-keyword)", + }, + }, + { + types: ["constant", "property", "class-name"], + style: { + color: "var(--code-token-constant)", + }, + }, + { + types: ["string", "inserted", "attr-value"], + style: { + color: "var(--code-token-string)", + }, + }, + { + types: ["string-expression"], + style: { + color: "var(--code-token-string-expression)", + }, + }, + { + types: ["number"], + style: { + color: "var(--code-token-number)", + }, + }, + { + types: ["punctuation", "operator"], + style: { + color: "var(--code-token-punctuation)", + }, + }, + { + types: ["function"], + style: { + color: "var(--code-token-function)", + }, + }, + { + types: ["variable", "parameter"], + style: { + color: "var(--code-token-parameter)", + }, + }, + { + types: 
["attr-name", "maybe-class-name"],
+      style: {
+        color: "var(--code-token-property)",
+      },
+    },
+    {
+      types: ["url", "link"],
+      style: {
+        color: "var(--code-token-link)",
+        textDecoration: "underline",
+      },
+    },
+    {
+      types: ["tag"],
+      style: {
+        color: "var(--code-token-keyword)", // reuse keyword color for tags
+      },
+    },
+    {
+      types: ["deleted"],
+      style: {
+        color: "red",
+      },
+    },
+    {
+      types: ["important", "bold"],
+      style: {
+        fontWeight: "bold",
+      },
+    },
+    {
+      types: ["italic"],
+      style: {
+        fontStyle: "italic",
+      },
+    },
+    {
+      types: ["highlight"],
+      style: {
+        backgroundColor: "var(--code-highlight-color)",
+      },
+    },
+  ],
+};
+
+module.exports = theme;
diff --git a/src/css/custom.scss b/src/css/custom.scss
index f39636d..9d90e95 100644
--- a/src/css/custom.scss
+++ b/src/css/custom.scss
@@ -22,13 +22,53 @@ body {
 :root {
   --root-wrapper-width: 90rem;
   --menu-indicator-color: #ccc;
+
+  // Light theme
+  --code-foreground: #000000;
+  --code-background: #f3f3f3;
+  --code-border: #d1d1d1;
+  --code-token-keyword: #5a2cbc;
+  --code-token-constant: #1a7032;
+  --code-token-string: #d18c0c;
+  --code-token-comment: #7e7e7e;
+  --code-token-parameter: hsl(var(--foreground-light) / 1);
+  --code-token-function: #1a7032;
+  --code-token-string-expression: #f1a10d;
+  --code-token-punctuation: hsl(var(--foreground-light) / 1);
+  --code-token-link: hsl(var(--foreground-light) / 1);
+  --code-token-number: hsl(var(--foreground-light) / 1);
+  --code-token-property: rgb(105, 76, 156);
+  --code-highlight-color: #1c1c1c;
+}
+
+pre {
+  border: 2px solid var(--code-border);
 }
-
-// html[data-theme="dark"] { // Dark overrides
-// }
+
+html[data-theme="dark"] {
+  // Dark theme
+  --code-foreground: #ffffff;
+  --code-background: #191919;
+  --code-border: #252525;
+  --code-token-keyword: #aca4ff;
+  --code-token-constant: #3ecf6e;
+  --code-token-string: #e7bb94;
+  --code-token-comment: #7e7e7e;
+  --code-token-parameter: #ffffff;
+  --code-token-function: #3ecf6e;
+
--code-token-string-expression: #ffcda1; + --code-token-punctuation: #ffffff; + --code-token-link: #ffffff; + --code-token-number: #ffffff; + --code-token-property: rgb(180, 148, 234); + --code-highlight-color: #232323; +} // Layout + .footer > .container, .navbar__inner, .main-wrapper { @@ -37,6 +77,18 @@ body { margin: auto; } +// Utilities + +.spacing-horz { + padding: 0 var(--ifm-spacing-horizontal); +} + +.block-section { + padding-top: rem(50px); + padding-bottom: rem(50px); + border-top: rem(1px) solid var(--docs-title-border); +} + // navbar .docusaurus-highlight-code-line { @@ -113,6 +165,38 @@ body { // Sidebar menu .main-wrapper { + .menu__list { + .menu__list { + position: relative; + + &:before { + content: ""; + display: block; + width: rem(1px); + background: var(--docs-title-border); + position: absolute; + left: rem(15px); + top: 0; + bottom: 0; + } + + .menu__link--active:not(.menu__link--sublist) { + position: relative; + + &:before { + content: ""; + display: block; + width: rem(1px); + background: var(--ifm-color-primary); + position: absolute; + left: 0; + top: 0; + bottom: 0; + } + } + } + } + .menu { $self: &; @@ -123,14 +207,14 @@ body { font-size: rem(14px); &--active { - font-weight: bold; + font-weight: 500; } &--sublist { position: relative; + &.menu__link--active { color: var(--ifm-menu-color); - font-weight: bold; } &:after { @@ -158,7 +242,6 @@ body { .menu__link--sublist { &:after { transform: rotate(45deg); - // background: var(--menu-indicator-color); } } } @@ -166,23 +249,12 @@ body { ul { font-weight: 400; - > li { - ul { - padding-left: rem(10px); - margin-top: rem(10px); - margin-bottom: rem(10px); - li { - padding: rem(2px) 0; - margin: 0; - position: relative; - } - } - } } } } // Breadcrumbs + .breadcrumbs { margin: rem(30px) 0 rem(20px); font-size: rem(12px); @@ -200,7 +272,6 @@ body { &:last-child { color: var(--ifm-color-primary); - opacity: 0.8; } &:not(:last-child):after { @@ -222,7 +293,7 @@ body { 
.theme-edit-this-page { color: var(--ifm-menu-color); - font-weight: 700; + font-weight: 500; font-size: 14px; // border-top: 1px solid var(--docs-title-border); @@ -281,7 +352,7 @@ body { } html[data-theme="dark"] & { - background-color: #202020; + background-color: #000000; border-color: rgb(56, 56, 56); } } @@ -338,3 +409,111 @@ body { display: none; } } + +// Search + +body .DocSearch { + --docsearch-modal-width: 800px; + + &-Button { + margin-left: 1rem; + } + + &-Search-Icon { + height: 1rem; + } + + &-Hit { + &-source { + font-family: "Funnel Display", sans-serif; + color: white; + } + &[aria-selected="true"] a { + background-color: var(--docsearch-hit-background-active); + } + } + + &-Logo { + filter: grayscale(100%); + } +} + +.search-page-wrapper { + --ifm-toc-border-color: var(--docs-title-border); + + a[aria-label="Search by Algolia"] { + opacity: 0.5; + filter: grayscale(100%); + } + + main { + --ifm-h2-font-size: 1rem; + + article { + border-top: 1px solid var(--docs-title-border); + + a { + color: inherit; + } + } + + .breadcrumbs { + margin: 0; + + &__item { + &:after { + padding: 0 0.5rem; + } + + &:first-child { + display: inline-block; + } + } + } + } +} + +[data-theme="light"] .DocSearch { + /* --docsearch-primary-color: var(--ifm-color-primary); */ + /* --docsearch-text-color: var(--ifm-font-color-base); */ + --docsearch-muted-color: var(--ifm-color-secondary-darkest); + --docsearch-container-background: rgba(94, 100, 112, 0.7); + /* Modal */ + --docsearch-modal-background: var(--ifm-color-secondary-lighter); + /* Search box */ + --docsearch-searchbox-background: var(--ifm-color-secondary); + --docsearch-searchbox-focus-background: var(--ifm-color-white); + /* Hit */ + --docsearch-hit-color: var(--ifm-font-color-base); + --docsearch-hit-active-color: var(--ifm-color-white); + --docsearch-hit-background: var(--ifm-color-white); + --docsearch-hit-background-active: rgba(94, 100, 112, 0.7); + /* Footer */ + --docsearch-footer-background: 
var(--ifm-color-white); +} + +[data-theme="dark"] .DocSearch { + --docsearch-text-color: var(--ifm-font-color-base); + --docsearch-muted-color: var(--ifm-color-secondary-darkest); + --docsearch-container-background: rgba(24, 24, 24, 0.7); + --docsearch-highlight-color: var(--ifm-color-primary); + + /* Modal */ + --docsearch-modal-background: var(--ifm-background-color); + /* Search box */ + --docsearch-searchbox-background: var(--ifm-background-color); + --docsearch-searchbox-focus-background: var(--ifm-color-black); + /* Hit */ + --docsearch-hit-color: var(--ifm-font-color-base); + --docsearch-hit-active-color: var(--ifm-color-white); + --docsearch-hit-background: var(--ifm-color-emphasis-100); + --docsearch-hit-background-active: rgba(124, 124, 124, 0.7); + + /* Footer */ + --docsearch-footer-background: var(--ifm-background-surface-color); + --docsearch-key-gradient: linear-gradient( + -26.5deg, + var(--ifm-color-emphasis-200) 0%, + var(--ifm-color-emphasis-100) 100% + ); +} diff --git a/src/css/fonts.scss b/src/css/fonts.scss index 0f85965..1319373 100644 --- a/src/css/fonts.scss +++ b/src/css/fonts.scss @@ -1 +1 @@ -@import url("https://fonts.googleapis.com/css2?family=Inter:wght@400;500;700;800&display=swap"); +@import url("https://fonts.googleapis.com/css2?family=Funnel+Display:wght@300..800&family=Inter:ital,opsz,wght@0,14..32,100..900;1,14..32,100..900&display=swap"); diff --git a/src/css/infima-variables.scss b/src/css/infima-variables.scss index 9ed6a66..5358e18 100644 --- a/src/css/infima-variables.scss +++ b/src/css/infima-variables.scss @@ -1,6 +1,20 @@ +// Custom vars +:root { + --card-color: rgb(255, 255, 255, 0.9); + --card-border-color: #e8e8e8; + --card-box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1); + --card-box-shadow-highlight: 0 2px 25px rgba(31, 96, 233, 0.05); +} + +html[data-theme="dark"] { + --card-color: rgb(29, 29, 29, 0.4); + --card-border-color: #313131; + --card-box-shadow: 0 2px 25px rgba(238, 238, 238, 0.05); +} + :root 
{ // Colors - --ifm-color-primary: #1f60e9; + --ifm-color-primary: #00cd50; --ifm-color-primary-dark: #1554d8; --ifm-color-primary-darker: #144fcc; --ifm-color-primary-darkest: #1141a8; @@ -9,6 +23,10 @@ --ifm-color-primary-lightest: #6793f0; --ifm-color-secondary-contrast-background: rgb(236, 236, 236); + --ifm-color-info-dark: rgb(0, 205, 80); + --ifm-color-info-darker: rgb(7, 160, 66); + --ifm-color-info-darkest: rgb(4, 146, 59); + // Fonts --ifm-font-family-base: "Inter", system-ui, -apple-system, Segoe UI, Roboto, Ubuntu, Cantarell, Noto Sans, sans-serif, BlinkMacSystemFont, "Segoe UI", @@ -16,14 +34,16 @@ "Segoe UI Symbol"; --ifm-font-size-base: 1rem; --ifm-font-weight-bold: 800; + --ifm-heading-font-family: "Funnel Display", sans-serif; + --ifm-heading-font-weight: 400; // Base --ifm-global-radius: 0rem; // Code - --ifm-code-background: rgb(226, 226, 226); --ifm-code-font-size: 80%; + --ifm-code-border-radius: 0.5rem; --ifm-pre-padding: 1rem; --ifm-leading-desktop: 1.75; @@ -45,28 +65,30 @@ --docs-title-border: #e0e0e0; --ifm-alert-border-left-width: 2px; + --ifm-alert-border-radius: 0.5rem; - /* Horizontal Rules. */ - // --ifm-breadcrumb-border-radius: 1.5rem; + /* Horizontal Rules. 
*/ --ifm-breadcrumb-spacing: 0rem; --ifm-breadcrumb-color-active: inherit; --ifm-breadcrumb-item-background-active: transparent; - // --ifm-breadcrumb-padding-horizontal: 0.8rem; - // --ifm-breadcrumb-padding-vertical: 0.4rem; - // --ifm-breadcrumb-size-multiplier: 1; --ifm-breadcrumb-separator: none; - // --ifm-breadcrumb-separator-filter: none; - // --ifm-breadcrumb-separator-size: 0.5rem; - // --ifm-breadcrumb-separator-size-multiplier: 1.25; + --ifm-menu-color: var(--ifm-color-emphasis-700); + --ifm-menu-color-active: var(--ifm-color-primary); + --ifm-color-info-contrast-background: rgb(255, 255, 255); + --ifm-color-info-contrast-foreground: rgb(0, 0, 0); } html[data-theme="dark"] { - --ifm-color-primary: #5087fd; - --ifm-background-color: #1d1d1d; + --ifm-color-primary: #00cd50; + --ifm-background-color: #000000; --ifm-background-surface-color: var(--ifm-background-color); --docs-title-border: #343434; --ifm-toc-border-color: transparent; + --ifm-menu-color: var(--ifm-color-emphasis-400); + + --ifm-color-info-contrast-background: rgb(0, 0, 0); + --ifm-color-info-contrast-foreground: rgb(255, 255, 255); } .markdown { diff --git a/src/pages/index.module.scss b/src/pages/index.module.scss new file mode 100644 index 0000000..9e7b161 --- /dev/null +++ b/src/pages/index.module.scss @@ -0,0 +1,152 @@ +@import "http://github.com/utils"; + +.homeWrapper { + margin: auto 0; + + .container { + max-width: var(--root-wrapper-width); + } +} + +.heroBanner { + padding-top: rem(100px); + position: relative; + display: flex; + align-items: center; + + .heroContent { + display: flex; + justify-content: space-between; + align-items: center; + .heroText { + max-width: rem(900px); + } + } + + .heroTitle { + font-size: rem(48px); + line-height: 1; + } + + .heroSubTitle { + font-size: rem(20px); + } + + @media screen and (max-width: 996px) { + padding-top: rem(50px); + + .heroTitle { + font-size: rem(26px); + } + + .heroSubTitle { + font-size: rem(16px); + br { + display: 
none; + } + } + } +} + +.features { + padding: rem(32px) 0; + + @media screen and (max-width: 996px) { + padding: 0 0 rem(32px); + } +} + +.arrow { + margin-left: rem(5px); + transition: all 0.3s ease; +} + +.card { + display: block; + background-color: var(--card-color); + border: rem(1px) solid var(--card-border-color); + border-radius: rem(8px); + padding: rem(32px); + text-align: center; + color: var(--ifm-font-color-base); + box-shadow: var(--card-box-shadow); + margin-bottom: rem(10px); + transition: all 0.3s ease; + + h3, + p { + margin-bottom: 0; + } + + p { + font-size: rem(14px); + } + + img { + width: rem(60px); + margin-bottom: rem(10px); + } + + &:hover { + transform: translateY(-2px) scale(1.02); + text-decoration: none; + color: var(--ifm-font-color-base); + border-color: var(--ifm-color-primary); + box-shadow: var(--card-box-shadow-highlight); + + .arrow { + transform: translate(3px, 0); + } + } + + @media screen and (max-width: 996px) { + display: flex; + text-align: left; + + img { + width: rem(30px); + margin: auto 15px auto 0; + } + h3 { + font-size: rem(16px); + } + } +} + +.linkList { + a { + display: block; + font-size: rem(14px); + } +} + +.community { + border-bottom: 1px solid var(--docs-title-border); + + h3 { + margin-bottom: 0; + color: var(--ifm-font-color-base); + } + + .communityLinks { + margin-top: rem(40px); + + .communityLink { + &:hover { + text-decoration: none; + + p { + text-decoration: underline; + } + + .arrow { + transform: rotate(-45deg) translate(3px, 0px); + } + } + } + } + + .arrow { + transform: rotate(-45deg); + } +} diff --git a/src/pages/index.tsx b/src/pages/index.tsx new file mode 100644 index 0000000..a0f6217 --- /dev/null +++ b/src/pages/index.tsx @@ -0,0 +1,166 @@ +import Head from '@docusaurus/Head'; +import Link from '@docusaurus/Link'; +import Layout from '@theme/Layout'; +import clsx from 'clsx'; +import React, { FC } from 'react'; +import IconThemeArrow from '../theme/IconArrow'; +import styles from 
'./index.module.scss'; + +const HomepageHeader: FC<{}> = () => { + return ( +
+
+
+
+

Source Network Developer Hub

+

Your guide to building with the Source Network stack.
Get started, explore the docs, and discover the power of decentralized data.

+
+
+
+
+
+
+ ); +} + +const HomepageFeatures: FC<{}> = () => { + const features = [ + { + link: "/defradb", + image: "./img/product/defradb.svg", + title: "DefraDB", + subTitle: "Deploy decentralized databases", + }, + { + link: "/sourcehub", + image: "./img/product/sourcehub.svg", + title: "SourceHub", + subTitle: "Build trust & interoperability", + }, + { + link: "/orbis", + image: "./img/product/orbis.svg", + title: "Orbis", + subTitle: "Distributed secrets management", + }, + { + link: "/lensvm", + image: "./img/product/lensvm.svg", + title: "LensVM", + subTitle: "Bidirectional data transformation", + } + ] + + return ( +
+
+
+ {features.map((feature, i) => { + return
+ + +
+

{feature.title}

+

{feature.subTitle}

+
+ +
+ })} +
+
+
+ ); +} + +const HomepageReferenceLinks: FC<{}> = () => { + return ( +
+
+

Quick Reference

+

A collection of guides and references to help you navigate the Source Network.

+ +
+
+ DefraDB Query Language Overview + DefraDB CLI Reference + DefraDB Peer-to-Peer Guide + DefraDB Schema Migration Guide +
+
+ SourceHub Getting Started + SourceHub API + Orbis Installation + Orbis Setup Authorization Policy +
+
+
+
+
+
+  );
+}
+
+const HomepageCommunity: FC<{}> = () => {
+  const links = [
+    {
+      link: "https://discord.source.network/",
+      title: "Discord",
+      linkText: "Join our server"
+    },
+    {
+      link: "https://github.com/sourcenetwork/docs.source.network",
+      title: "GitHub",
+      linkText: "Contribute to Source"
+    },
+    {
+      link: "https://x.com/sourcenetwrk",
+      title: "Twitter",
+      linkText: "Follow us on Twitter"
+    },
+    {
+      link: "https://t.me/source_network",
+      title: "Telegram",
+      linkText: "Join the chat"
+    }
+  ]
+
+  return (
+
+

Join Our Community

+

Engage with our developer community and the Source team to get help, exchange ideas & collaborate.

+
+ {links.map((lnk, i) => { + return
+ +
+

{lnk.title}

+

{lnk.linkText}

+
+ +
+ })} +
+
+ ); +} + +export default function Home() { + return ( + + + + + +
+ +
+ + + +
+
+
+ ); +} diff --git a/static/img/akash/deploy.png b/static/img/akash/deploy.png new file mode 100644 index 0000000..63265b2 Binary files /dev/null and b/static/img/akash/deploy.png differ diff --git a/static/img/akash/info.png b/static/img/akash/info.png new file mode 100644 index 0000000..eb02efd Binary files /dev/null and b/static/img/akash/info.png differ diff --git a/static/img/defradb-cover.png b/static/img/defradb-cover.png new file mode 100644 index 0000000..3f70183 Binary files /dev/null and b/static/img/defradb-cover.png differ diff --git a/static/img/hero_grid_black_1.png b/static/img/hero_grid_black_1.png new file mode 100644 index 0000000..470c408 Binary files /dev/null and b/static/img/hero_grid_black_1.png differ diff --git a/static/img/hero_grid_white_1.png b/static/img/hero_grid_white_1.png new file mode 100644 index 0000000..c77930c Binary files /dev/null and b/static/img/hero_grid_white_1.png differ diff --git a/static/img/icon-defradb.svg b/static/img/icon-defradb.svg new file mode 100644 index 0000000..bd713ae --- /dev/null +++ b/static/img/icon-defradb.svg @@ -0,0 +1,25 @@ + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/static/img/icon-orbis.svg b/static/img/icon-orbis.svg new file mode 100644 index 0000000..0c04d05 --- /dev/null +++ b/static/img/icon-orbis.svg @@ -0,0 +1,29 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/static/img/icon-sourcehub.svg b/static/img/icon-sourcehub.svg new file mode 100644 index 0000000..bda2f40 --- /dev/null +++ b/static/img/icon-sourcehub.svg @@ -0,0 +1,23 @@ + + + + + + + + + + + + + + + + + + + + + + + diff --git a/static/img/orbis/cover.png b/static/img/orbis/cover.png new file mode 100644 index 0000000..fae4faa Binary files /dev/null and b/static/img/orbis/cover.png differ diff --git a/static/img/orbis/pre.png b/static/img/orbis/pre.png new file mode 100644 index 0000000..c161c60 Binary files /dev/null and b/static/img/orbis/pre.png differ diff --git 
a/static/img/product/defradb.svg b/static/img/product/defradb.svg new file mode 100644 index 0000000..9594dd7 --- /dev/null +++ b/static/img/product/defradb.svg @@ -0,0 +1,3 @@ + + + diff --git a/static/img/product/lensvm.svg b/static/img/product/lensvm.svg new file mode 100644 index 0000000..4e9664e --- /dev/null +++ b/static/img/product/lensvm.svg @@ -0,0 +1,5 @@ + + + + + diff --git a/static/img/product/orbis.svg b/static/img/product/orbis.svg new file mode 100644 index 0000000..882b9f0 --- /dev/null +++ b/static/img/product/orbis.svg @@ -0,0 +1,3 @@ + + + diff --git a/static/img/product/sourcehub.svg b/static/img/product/sourcehub.svg new file mode 100644 index 0000000..cd386e5 --- /dev/null +++ b/static/img/product/sourcehub.svg @@ -0,0 +1,5 @@ + + + + + diff --git a/static/img/source-docs-logo-w_v2.svg b/static/img/source-docs-logo-w_v2.svg new file mode 100644 index 0000000..248bf49 --- /dev/null +++ b/static/img/source-docs-logo-w_v2.svg @@ -0,0 +1,11 @@ + + + + + + + + + + + diff --git a/static/img/source-docs-logo_v2.svg b/static/img/source-docs-logo_v2.svg new file mode 100644 index 0000000..9144417 --- /dev/null +++ b/static/img/source-docs-logo_v2.svg @@ -0,0 +1,11 @@ + + + + + + + + + + + diff --git a/static/img/source-logo-w_v2.svg b/static/img/source-logo-w_v2.svg new file mode 100644 index 0000000..fba581a --- /dev/null +++ b/static/img/source-logo-w_v2.svg @@ -0,0 +1,9 @@ + + + + + + + + + diff --git a/static/img/source-logo_v2.svg b/static/img/source-logo_v2.svg new file mode 100644 index 0000000..82e4ddd --- /dev/null +++ b/static/img/source-logo_v2.svg @@ -0,0 +1,9 @@ + + + + + + + + + diff --git a/static/img/sourcehub-cover-copy.png b/static/img/sourcehub-cover-copy.png new file mode 100644 index 0000000..bb05972 Binary files /dev/null and b/static/img/sourcehub-cover-copy.png differ diff --git a/static/img/sourcehub-cover.png b/static/img/sourcehub-cover.png new file mode 100644 index 0000000..75ef1be Binary files /dev/null and 
b/static/img/sourcehub-cover.png differ diff --git a/static/img/sourcehub/cu-annotated.png b/static/img/sourcehub/cu-annotated.png new file mode 100644 index 0000000..f2335c8 Binary files /dev/null and b/static/img/sourcehub/cu-annotated.png differ diff --git a/static/img/sourcehub/cu-annotated.png:Zone.Identifier b/static/img/sourcehub/cu-annotated.png:Zone.Identifier new file mode 100644 index 0000000..ae7e0cd --- /dev/null +++ b/static/img/sourcehub/cu-annotated.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 +HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/cu-annotated.png diff --git a/static/img/sourcehub/faucet.png b/static/img/sourcehub/faucet.png new file mode 100644 index 0000000..892b9ff Binary files /dev/null and b/static/img/sourcehub/faucet.png differ diff --git a/static/img/sourcehub/key-add-output.png b/static/img/sourcehub/key-add-output.png new file mode 100644 index 0000000..2401805 Binary files /dev/null and b/static/img/sourcehub/key-add-output.png differ diff --git a/static/img/sourcehub/object-owner.png b/static/img/sourcehub/object-owner.png new file mode 100644 index 0000000..4f69439 Binary files /dev/null and b/static/img/sourcehub/object-owner.png differ diff --git a/static/img/sourcehub/policy-ids-1.png b/static/img/sourcehub/policy-ids-1.png new file mode 100644 index 0000000..16bd584 Binary files /dev/null and b/static/img/sourcehub/policy-ids-1.png differ diff --git a/static/img/sourcehub/relgraph-simple.png b/static/img/sourcehub/relgraph-simple.png new file mode 100644 index 0000000..98a737c Binary files /dev/null and b/static/img/sourcehub/relgraph-simple.png differ diff --git a/static/img/sourcehub/relgraph-simple.png:Zone.Identifier b/static/img/sourcehub/relgraph-simple.png:Zone.Identifier new file mode 100644 index 0000000..97c94ca --- /dev/null +++ b/static/img/sourcehub/relgraph-simple.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 
+HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/relgraph-simple.png diff --git a/static/img/sourcehub/trust-protocol-defradb.png b/static/img/sourcehub/trust-protocol-defradb.png new file mode 100644 index 0000000..9b2eb6a Binary files /dev/null and b/static/img/sourcehub/trust-protocol-defradb.png differ diff --git a/static/img/sourcehub/ttu-eval.png b/static/img/sourcehub/ttu-eval.png new file mode 100644 index 0000000..3eb7514 Binary files /dev/null and b/static/img/sourcehub/ttu-eval.png differ diff --git a/static/img/sourcehub/ttu-eval.png:Zone.Identifier b/static/img/sourcehub/ttu-eval.png:Zone.Identifier new file mode 100644 index 0000000..b854fba --- /dev/null +++ b/static/img/sourcehub/ttu-eval.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 +HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/ttu-eval.png diff --git a/static/img/sourcehub/ttu-relgraph-2.png b/static/img/sourcehub/ttu-relgraph-2.png new file mode 100644 index 0000000..2c1aaa2 Binary files /dev/null and b/static/img/sourcehub/ttu-relgraph-2.png differ diff --git a/static/img/sourcehub/ttu-relgraph-2.png:Zone.Identifier b/static/img/sourcehub/ttu-relgraph-2.png:Zone.Identifier new file mode 100644 index 0000000..e16d954 --- /dev/null +++ b/static/img/sourcehub/ttu-relgraph-2.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 +HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/ttu-relgraph-2.png diff --git a/static/img/sourcehub/ttu-relgraph-3.png b/static/img/sourcehub/ttu-relgraph-3.png new file mode 100644 index 0000000..11d1695 Binary files /dev/null and b/static/img/sourcehub/ttu-relgraph-3.png differ diff --git a/static/img/sourcehub/ttu-relgraph-3.png:Zone.Identifier b/static/img/sourcehub/ttu-relgraph-3.png:Zone.Identifier new file mode 100644 index 0000000..9ba3818 --- /dev/null +++ 
b/static/img/sourcehub/ttu-relgraph-3.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 +HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/ttu-relgraph-3.png diff --git a/static/img/sourcehub/ttu-relgraph-annotated.png b/static/img/sourcehub/ttu-relgraph-annotated.png new file mode 100644 index 0000000..177e856 Binary files /dev/null and b/static/img/sourcehub/ttu-relgraph-annotated.png differ diff --git a/static/img/sourcehub/ttu-relgraph-annotated.png:Zone.Identifier b/static/img/sourcehub/ttu-relgraph-annotated.png:Zone.Identifier new file mode 100644 index 0000000..e7a8b58 --- /dev/null +++ b/static/img/sourcehub/ttu-relgraph-annotated.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 +HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/ttu-relgraph-annotated.png diff --git a/static/img/sourcehub/ttu-relgraph.png b/static/img/sourcehub/ttu-relgraph.png new file mode 100644 index 0000000..bd89490 Binary files /dev/null and b/static/img/sourcehub/ttu-relgraph.png differ diff --git a/static/img/sourcehub/ttu-relgraph.png:Zone.Identifier b/static/img/sourcehub/ttu-relgraph.png:Zone.Identifier new file mode 100644 index 0000000..d93b64b --- /dev/null +++ b/static/img/sourcehub/ttu-relgraph.png:Zone.Identifier @@ -0,0 +1,3 @@ +[ZoneTransfer] +ZoneId=3 +HostUrl=https://raw.githubusercontent.com/sourcenetwork/zanzi/dev/docs/grokking-zanzibar-relbac/ttu-relgraph.png