WIP: RFC 3 : GraphQL API for Duniter Client
TODO
```json
"sentry": true,
"Membership": {
  "type": "IN",
  "blockstamp": "22965-7FD42...",
  "timestamp": "229658678"
},
"Revoke": {
  "revoked": true,
  "blockstamp": "22965-7FD42...",
  "timestamp": "229658678"
}
```

To get the full raw document type: I agree. I will group the request examples into one with variables and directives:
```graphql
query getIdentity($data: Boolean = true, $raw: Boolean = false, $merkleRoot: Boolean = false) {
  Identity(uid: "john", pubkey: "z7rDt7...", blockstamp: "22765-1AD84...") {
    data @include(if: $data) {
      member
      signature
      written
      outdistanced
      sentry
      Membership
      Revoke
    }
    raw @include(if: $raw) {
      value
      signature
    }
    merkleRoot @include(if: $merkleRoot)
  }
}
```

Variables:

```json
{
  "data": true,
  "raw": true,
  "merkleRoot": true
}
```
I think we could implement a custom directive to support such requests : https://github.com/lirown/graphql-custom-directive
It's allowed by the GraphQL specification:
> Directives can be useful to get out of situations where you otherwise would need to do string manipulation to add and remove fields in your query. Server implementations may also add experimental features by defining completely new directives.
With a custom directive, we could do something like this:

```graphql
query getIdentity($merkleRoot: Boolean!) {
  Identity(uid: "john", pubkey: "z7rDt7...", blockstamp: "22765-1AD84...") {
    data @validate(merkleRoot: $merkleRoot) {
      member
      signature
      written
      outdistanced
      sentry
      Membership
      Revoke
    }
    raw @validate(merkleRoot: $merkleRoot) {
      value
      signature
    }
  }
}
```
Edited by inso

Okay, so I worked all afternoon trying to develop a proof of concept. I forked a demo repository from GitHub. Here is the schema I ran:
```javascript
let typeDefs = `
  directive @validate on FIELD_DEFINITION

  type User {
    id: String!
    name: String
    messages: [Message]
  }

  type Message {
    created_at: Int
    text: String
  }

  type Query {
    hello: String!
    users(name: String): [User]
    validated_users(name: String): [User] @validate
  }
`;
```
Running the `users` query works nicely:
```shell
inso at archlinux in ~/code/graphql-express-sqlite (master)
$ curl -XPOST -H 'Content-Type:application/graphql' \
    -d 'query Users { users(name: "Bob"){messages{text}}}' \
    http://localhost:3000/graphql
{
  "data": {
    "users": [
      {
        "messages": [
          { "text": "hello?" },
          { "text": "I like pie" },
          ...
```
But the `validated_users` query does not:

```shell
inso at archlinux in ~/code/graphql-express-sqlite (master)
$ curl -XPOST -H 'Content-Type:application/graphql' \
    -d 'query Users { validated_users(name: "Bob"){messages{text}}}' \
    http://localhost:3000/graphql
{
  "errors": [
    {
      "message": "Expected Iterable, but did not find one for field Query.validated_users.",
      "locations": [ { "line": 1, "column": 15 } ],
      "path": [ "validated_users" ]
    }
  ],
  "data": { "validated_users": null }
}
```
Indeed, that makes sense. The request expects a list of `User`, but the `@validate` directive transforms the list into a sha256 hash. Since the result no longer matches the request, GraphQL throws an error.

So now I wonder how we can tell GraphQL which request we are trying to validate. Because if we pass the structure of the request, we have to get the full result.
Maybe we need something smarter : a mix of REST and GraphQL.
- We still use GraphQL to send requests about data, because this is very useful
- We use REST to send requests to validate data. The process is simple: the server receives a REST request containing a field like `GraphQLRequest: {}`. This field is the request, which the server sends to its own endpoint. It gets the results, builds the merkle tree, and returns the merkle root as the answer to the REST request.
I don't understand your example...
You create a query to get Users; it works OK. You create a second query that requests merkle roots. But you ask for a list of `User`, where you should ask for a `ValidatedUser` type as a String. And why do you need a directive if you have a second query to get merkle roots?
- What I see here is that custom directives cannot be used to change the type of the field returned by a query: a query returns only the tree of types defined in its definition. And that makes sense in a strongly typed API.
- In the RFC, the validation process you propose works well with two queries: one for the data, one for the merkle root of the data. Having queries for data/raw and other queries for the merkle roots of the data does not slow down the process.
- I don't like the idea of mixing the standards (REST + GraphQL)... I'm too old for that ;-)
So to summarize, I think we can stand on separate data queries and merkle root queries, and keep directives only to skip/include fields.

If we want to stay with one query, then we can use directives to skip/include a `dataMerkleRoot` String field and a `rawMerkleRoot` String field.

Maybe this would work:
```graphql
query getIdentity($data: Boolean = true, $raw: Boolean = false, $dataMerkleRoot: Boolean = false, $rawMerkleRoot: Boolean = false) {
  Identity(uid: "john", pubkey: "z7rDt7...", blockstamp: "22765-1AD84...") {
    data @include(if: $data) {
      member
      signature
      written
      outdistanced
      sentry
      Membership
      Revoke
    }
    dataMerkleRoot @include(if: $dataMerkleRoot)
    raw @include(if: $raw) {
      value
      signature
    }
    rawMerkleRoot @include(if: $rawMerkleRoot)
  }
}
```

Variables:

```json
{
  "data": true,
  "raw": true,
  "dataMerkleRoot": true,
  "rawMerkleRoot": true
}
```
Edited by Vincent Texier

> You create a second query that requests merkle roots. But you ask for a list of `User`, where you should ask for a `ValidatedUser` type as a String. And why do you need a directive if you have a second query to get merkle roots?
It was just to test the behaviour of directives when I change the type returned by the query.
> I don't like the idea of mixing the standards (REST + GraphQL)... I'm too old for that ;-)
I agree that it looks like more of a dirty patch than something nice :D
The problem is that the merkle root has to be validated for a given query. For example, the query below:

```graphql
query getIdentity {
  member
  signature
  written
  outdistanced
  sentry
  Membership
  Revoke
}
```

would have a different merkle root than this one:

```graphql
query getIdentity {
  member
  signature
  written
}
```

That's why we need a merkleRoot query which takes the query as a parameter.

The problem here is that a GraphQL query describes the shape of the result. That's a problem for our use case, because for merkle roots we need the same query but with a different shape of result (a string instead of an object).
I really have hope in "mutation" requests !
Because in a mutation request, you return whatever type you want...
https://medium.com/@tarkus/validation-and-user-errors-in-graphql-mutations-39ca79cd00bf
I think we are unlocked here, no ?
It seems good. Even if we will have to change our approach a bit :
- With v11 protocol, we should be able to check any data element useful to clients thanks to block headers (so with only 1 request)
- In the meantime, we can check the content of the API response using mutation to compare responses hashes.
The question is : is it really useful to develop this feature since in 1 or 2 years it won't be used anymore ? Should we focus our action on developing GraphQL + V11 protocol ?
> is it really useful to develop this feature since in 1 or 2 years it won't be used anymore?
No, but if the number of users explodes, maybe we will be urged to improve the BMA module a bit.
> Should we focus our action on developing GraphQL + V11 protocol?
Yes, we should.
So for the RFC, maybe you can change the validation process chapter, and I will change the examples if I understand it and the new protocol...
Edited by Vincent Texier
- Resolved by inso
```javascript
{
  "data": {
    "IdentityMerkleTree": "478D46A98F75..."
  }
}
```

**Query a list of identities UID:**

For all list queries, an offset and a limit variable are specified by default. You can use these to paginate your list and avoid a "timebomb" request.

*A "timebomb" request is a request on an infinitely growing list of entities, leading to slower and bigger responses that can, at the end, crash the server.*

I think so, because we write the SQL query on the server as we please and can hardcode a limit in it.
https://www.reindex.io/blog/building-a-graphql-server-with-node-js-and-sql/
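To illustrate hardcoding a limit server-side: a sketch of a guard that clamps whatever offset/limit the client sends before the SQL is built. The `MAX_LIMIT` ceiling, the defaults, and the table name are assumptions for the example, not values from the RFC:

```javascript
// Assumed hard ceiling: the SQL actually executed never exceeds it,
// no matter what the client asked for.
const MAX_LIMIT = 500;

function paginate({ offset = 0, limit = 50 } = {}) {
  // Clamp client-supplied values into a safe range.
  const safeOffset = Math.max(0, Math.floor(offset));
  const safeLimit = Math.min(Math.max(1, Math.floor(limit)), MAX_LIMIT);
  return {
    safeOffset,
    safeLimit,
    sql: `SELECT uid FROM identities LIMIT ${safeLimit} OFFSET ${safeOffset}`,
  };
}
```

A request asking for a million rows would silently be served `MAX_LIMIT` rows, which defuses the "timebomb" case.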
```
Timestamp: BLOCK_UID\n
SIGNATURE"
}
```

## Duniter Server module

The GraphQL API is added to the server as a Duniter module. As this module adds CPU and network load to the Duniter node, it is optional.

### New NoSQL Database

The good practice for handling data in databases is:

- To use a **relational database** to **write normalized data** (very fast to update an entity, as entities are in separate tables).
- To use a **document database (NoSQL database)** to **request denormalized data** (very fast, as all of an entity's information is aggregated in one document).

If we use a document database, we have to ensure that on every new block resolved, we update the documents accordingly.
Still, I think it makes sense to use a document database because:

- GraphQL requests have very deterministic parameters (3 parameters, for example, on the getIdentity request).
- These 3 parameters can be used as the indexes of the table: http://blog.benjamin-encz.de/post/sqlite-one-to-many-json1-extension/
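As a toy illustration of using those deterministic parameters as an index: the three `getIdentity` parameters can form a composite key into the denormalized documents. The key format and field names below are assumptions for the sketch, not from the RFC:

```javascript
// Toy in-memory document store keyed by (uid, pubkey, blockstamp),
// mimicking an index over the three deterministic getIdentity parameters.
const docs = new Map();

const key = (uid, pubkey, blockstamp) => `${uid}|${pubkey}|${blockstamp}`;

function putIdentity(doc) {
  docs.set(key(doc.uid, doc.pubkey, doc.blockstamp), doc);
}

function getIdentity(uid, pubkey, blockstamp) {
  // O(1) lookup on the denormalized document, instead of joining
  // normalized tables at query time.
  return docs.get(key(uid, pubkey, blockstamp));
}

putIdentity({ uid: "john", pubkey: "z7rDt7", blockstamp: "22765-1AD84", member: true });
```

A real implementation would keep such documents in SQLite (e.g. via the JSON1 extension linked above) with those three columns indexed.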
changed this line in version 6 of the diff
added 1 commit
- e93b9407 - Update 0003 RFC GraphQL API for Duniter Clients.md
added 1 commit
- 8d89ee60 - Modify endpoint with explicit subscriptions path
mentioned in issue clients/python/silkaj#7 (closed)
mentioned in issue clients/python/silkaj#175 (closed)
added 1 commit
- 6e86e218 - [feat] more complete schema in separated file
added 1 commit
- c517d347 - [fix] rfc gva: schema: add raw field for each document