Chapel: Website for Mason

Created on 24 Feb 2020 · 13 comments · Source: chapel-lang/chapel

From #7106
I think it would be good to have a website for the Mason registry from a user's point of view. There has also been some discussion about moving the Mason registry off of GitHub, the reason being to reduce the burden on users publishing packages. However, keeping the registry on GitHub is the safer option for package moderators in the long run.

As a first step toward improving this situation, we could create a standard website for the Mason package manager. The data could be fetched using the GitHub API. Some website functionality I can think of now is as follows:

  • A list of the most popular packages on the main page.
  • A nav bar with tabs for documentation (covering the Mason CLI and the registry), a search bar for packages, an option to create an account (if the registry is taken off of GitHub), and a forum for discussion and community members.
  • When a user searches for and opens a link to a package, they are redirected to a page showing the package's contents: the package name, the information from the Mason.toml file parsed and displayed in a structured way, the number of downloads, the date of the last update, and the contents of the project's README file.
  • All of this would also allow us to incorporate a ranking system in the future, ranking relevant packages on the basis of some metrics.
Labels: Tools, Design, Feature Request

All 13 comments

@ben-albrecht what do you think about this issue?
Could you ask other developers if they would like this feature? Since I'll be making my own GSoC project proposal, mostly involving work on the Mason registry and Mason itself, I could also include this issue in it. I'd love to know what the community thinks about it; please also mention any suggestions.

what do you think about this issue?

It is a good idea, and something we have wanted to do since mason's inception.

Since I'll be making my own GSoC project proposal, mostly involving work on the Mason registry and Mason itself, I could also include this issue in it.

I would say this is not a great candidate for a GSoC project, because it involves a lot of user-facing design and infrastructure design (hosting, and a mechanism for polling the registry for updates). That said, you could work on some subset of this as part of GSoC, or just try to drive the design discussion forward in general.

@ben-albrecht I did some digging on this issue. Since it's safer to keep the registry on GitHub, I'm inclined toward finding ways to get data from the registry repository using the GitHub API.
For example, making a GET API call to
https://api.github.com/repos/chapel-lang/mason-registry/contents/Bricks
gives us the list of packages inside the registry's Bricks directory by name.
We can also see the list of contributors to the packages using
GET https://api.github.com/repos/chapel-lang/mason-registry/contributors
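As a rough sketch, the Bricks listing above could be fetched and reduced to package names with Python's standard library; the live call is commented out here because unauthenticated GitHub API calls are rate-limited, so a trimmed sample payload stands in for the real response:

```python
import json
from urllib.request import urlopen

REGISTRY_API = "https://api.github.com/repos/chapel-lang/mason-registry"

def entry_names(contents_json):
    """Pull the 'name' field out of each entry in a contents API payload."""
    return [entry["name"] for entry in contents_json]

# Live call (requires network, subject to rate limits):
# with urlopen(f"{REGISTRY_API}/contents/Bricks") as resp:
#     print(entry_names(json.load(resp)))

# Offline demonstration with a trimmed sample payload:
sample = [{"name": "Curl", "type": "dir"}, {"name": "RecordParser", "type": "dir"}]
print(entry_names(sample))  # ['Curl', 'RecordParser']
```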

Similarly, we can find details such as the README, forks (which solves the package-popularity problem), author name, activity, and last-update information for a particular package, using API calls such as
GET https://api.github.com/repos/:owner/:repo_name/readme
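For illustration, a hypothetical helper could build these per-package endpoints from a known owner/repo pair (the repo-details endpoint covers forks, activity, and last update):

```python
# Hypothetical helper, not part of any existing Mason tooling:
def repo_endpoints(owner, repo_name):
    """Build the per-package GitHub API endpoints discussed above."""
    base = f"https://api.github.com/repos/{owner}/{repo_name}"
    return {"details": base, "readme": f"{base}/readme"}

print(repo_endpoints("oplambeck", "chapel-gnuplot")["readme"])
# https://api.github.com/repos/oplambeck/chapel-gnuplot/readme
```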

What's the problem?

I can find contributor names and package names from the mason-registry repo, but I don't know a way to map them to each other so that we can make an API call to a particular package's repo of the form:
GET https://api.github.com/repos/:owner/:repo_name

@ankingcodes - I think we will need to get the package repo from the source field of the manifest files in the registry:

e.g.

[brick]
name = "Gnuplot"
authors = ["Marcos Cleison","Owen Plambeck"]
version = "0.0.1"
chplVersion = "1.18.0"
source = "https://github.com/oplambeck/chapel-gnuplot.git"

[dependencies]
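For illustration, a small hypothetical helper could pull the :owner and :repo_name pieces out of that source field:

```python
import re

def owner_repo(source_url):
    """Extract (owner, repo) from a GitHub URL in a manifest's source field."""
    match = re.search(r"github\.com/([^/]+)/([^/]+?)(?:\.git)?/?$", source_url)
    if match is None:
        raise ValueError(f"not a GitHub URL: {source_url}")
    return match.group(1), match.group(2)

print(owner_repo("https://github.com/oplambeck/chapel-gnuplot.git"))
# ('oplambeck', 'chapel-gnuplot')
```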

@ben-albrecht That would make life so much easier, but there's a problem:
I tried to get the contents of the mason-registry's README using:
https://api.github.com/repos/chapel-lang/mason-registry/readme

As you can see there, the content is encoded (base64, I think).

That means we need to make API calls in the following order:

  • Call the mason-registry Bricks directory for the package names
  • Call for each package's TOML file contents
  • Decode the content and parse out the author and package name
  • Call each package's GitHub repo for the other details

Perhaps, if there's some way to design the mason-registry GitHub repo so that it's easier to get the package names and author names, we could make direct API calls of the form:
GET https://api.github.com/repos/:author/:packagename/contents
That would be much simpler.
I feel this project is achievable, so I think other Chapel developers should definitely think about this problem.

That sounds right. I am not sure I follow your suggestion about restructuring the repository. Decoding the base64 contents to UTF-8 and parsing the TOML should work fine.

@ben-albrecht I just tried to convert the contents of the README from base64 to UTF-8; for some reason, the decoding was not perfect and produced unnecessary characters.

I am not sure I follow your suggestion about restructuring the repository.

Making so many API calls, and running decoding and parsing functions for each one, would make the website extremely slow. I was thinking of some kind of cache for the registry.
Do you remember the discussion where I suggested a separate file that keeps points for each package, updated during CI, to improve mason search?
We could add an author name to that file, and the file itself could serve as our cache.
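As a sketch of what that CI-generated file could contain (the record format here is entirely hypothetical):

```python
import json

def build_cache(bricks):
    """Flatten parsed [brick] tables into the records the website would need."""
    return [
        {
            "name": b["name"],
            "author": b.get("authors", ["unknown"])[0],
            "source": b.get("source", ""),
        }
        for b in bricks
    ]

# Sample input: [brick] tables as parsed from the registry's manifests.
bricks = [{"name": "Gnuplot",
           "authors": ["Marcos Cleison", "Owen Plambeck"],
           "source": "https://github.com/oplambeck/chapel-gnuplot.git"}]
print(json.dumps(build_cache(bricks), indent=2))
```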

I just tried to convert the contents of the README from base64 to UTF-8; for some reason, the decoding was not perfect and produced unnecessary characters.

Mind pasting the code you tried?

I was thinking of some kind of cache for the registry.
Do you remember the discussion where I suggested a separate file that keeps points for each package, updated during CI, to improve mason search?
We could add an author name to that file, and the file itself could serve as our cache.

Ah yes, I like that idea. The cache would not necessarily have to be in the GitHub repository either, but we can continue with this approach for now.

@ben-albrecht I used an online converter: https://www.base64decode.org/
Content of the README (in base64): https://hastebin.com/ubacuyuxep

Ah yes, I like that idea. The cache would not necessarily have to be in the GitHub repository either, but we can continue with this approach for now.

I'll think some more on this.

This worked for me:

import base64

# Base64-encoded README contents, as returned in the GitHub API's 'content' field:
a = b'Cgo9PT09PT09PT09PT09PQpNYXNvbi1SZWdpc3RyeQo9PT09PT09PT09PT09\nPQoKVGhlIG1hc29uIHJlZ2lzdHJ5IGlzIGEgR2l0SHViIHJlcG9zaXRvcnkg\nY29udGFpbmluZyBhIGxpc3Qgb2YgdmVyc2lvbmVkIG1hbmlmZXN0IGZpbGVz\nLgoKYE1hc29uLVJlZ2lzdHJ5IDxodHRwczovL2dpdGh1Yi5jb20vY2hhcGVs\nLWxhbmcvbWFzb24tcmVnaXN0cnk+YF8uCgpUaGUgcmVnaXN0cnkgc3RydWN0\ndXJlIGlzIGEgaGllcmFyY2h5IGFzIGZvbGxvd3M6CgoKLi4gY29kZS1ibG9j\nazo6IHRleHQKCiByZWdpc3RyeS8KICAgQ3VybC8KICAgICAgMS4wLjAudG9t\nbAogICAgICAyLjAuMC50b21sCiAgIFJlY29yZFBhcnNlci8KICAgICAgMS4w\nLjAudG9tbAogICAgICAxLjEuMC50b21sCiAgICAgIDEuMi4wLnRvbWwKICAg\nVmlzdWFsRGVidWcvCiAgICAgIDIuMi4wLnRvbWwKICAgICAgMi4yLjEudG9t\nbAoKCkVhY2ggdmVyc2lvbmVkIG1hbmlmZXN0IGZpbGUgaXMgaWRlbnRpY2Fs\nIHRvIHRoZSBtYW5pZmVzdCBmaWxlIGluIHRoZSB0b3AtbGV2ZWwgZGlyZWN0\nb3J5Cm9mIHRoZSBwYWNrYWdlIHJlcG9zaXRvcnksIHdpdGggb25lIGV4Y2Vw\ndGlvbiwgYSBVUkwgcG9pbnRpbmcgdG8gdGhlIHJlcG9zaXRvcnkgYW5kIHJl\ndmlzaW9uCmluIHdoaWNoIHRoZSB2ZXJzaW9uIGlzIGxvY2F0ZWQuCgpUaGUg\nJ3JlZ2lzdHJ5JyBgYDAuMS4wLnRvbWxgYCB3b3VsZCBpbmNsdWRlIHRoZSBh\nZGRpdGlvbmFsIHNvdXJjZSBmaWVsZDoKCi4uIGNvZGUtYmxvY2s6OiB0ZXh0\nCgogICAgIFticmlja10KICAgICBuYW1lID0gImhlbGxvX3dvcmxkIgogICAg\nIHZlcnNpb24gPSAiMC4xLjAiCiAgICAgYXV0aG9yID0gWyJTYW0gUGFydGVl\nIDxTYW1AUGFydGVlLmNvbT4iXQogICAgIHNvdXJjZSA9ICJodHRwczovL2dp\ndGh1Yi5jb20vU3BhcnRlZS9oZWxsb193b3JsZCIKCiAgICAgW2RlcGVuZGVu\nY2llc10KICAgICBjdXJsID0gJzEuMC4wJwoKCgoKClRPTUwKPT09PQoKVE9N\nTCBpcyB0aGUgY29uZmlndWF0aW9uIGxhbmd1YWdlIGNob3NlbiBieSB0aGUg\nY2hhcGVsIGRldmVsb3BlcnMgZm9yCmNvbmZpZ3VyaW5nIHByb2dyYW1zIHdy\naXR0ZW4gaW4gY2hhcGVsIHVzaW5nIG1hc29uLiBBIFRPTUwgZmlsZSBjb250\nYWlucwp0aGUgbmVzc2VzY2FyeSBpbmZvcm1hdGlvbiB0byBidWlsZCBhIGNo\nYXBlbCBwcm9ncmFtIHVzaW5nIG1hc29uLiAKYFRPTUwgU3BlYyA8aHR0cHM6\nLy9naXRodWIuY29tL3RvbWwtbGFuZy90b21sPmBfLgoKCgoKClN1Ym1pdCBh\nIHBhY2thZ2UgCj09PT09PT09PT09PT09PT0KClRoZSBtYXNvbiByZWdpc3Ry\neSB3aWxsIGhvbGQgdGhlIG1hbmlmZXN0IGZpbGVzIGZvciBwYWNrYWdlcyBz\ndWJtaXR0ZWQgYnkgZGV2ZWxvcGVycy4KVG8gY29udHJpYnV0ZSBhIHBhY2th\nZ2UgdG8gdGhlIG1hc29uLXJlZ2lzdHJ5IGEgY2hhcGVsIGRldmVsb3BlciB3\naWxsIG5lZW
QgdG8gaG9zdCB0aGVpcgpwcm9qZWN0IGFuZCBzdWJtaXQgYSBw\ndWxsIHJlcXVlc3QgdG8gdGhlIG1hc29uLXJlZ2lzdHJ5IHdpdGggdGhlIHRv\nbWwgZmlsZSBwb2ludGluZwp0byB0aGVpciBwcm9qZWN0LiBGb3IgYSBtb3Jl\nIGRldGFpbGVkIGRlc2NyaXB0aW9uIGZvbGxvdyB0aGUgc3RlcHMgYmVsb3cu\nCgpTdGVwczoKICAgICAgMSkgV3JpdGUgYSBsaWJyYXJ5IG9yIGJpbmFyeSBw\ncm9qZWN0IGluIGNoYXBlbCB1c2luZyBtYXNvbgogICAgICAyKSBIb3N0IHRo\nYXQgcHJvamVjdCBpbiBhIGdpdCByZXBvc2l0b3J5LiAoZS5nLiBHaXRIdWIp\nCiAgICAgIDMpIENyZWF0ZSBhIHRhZyBvZiB5b3VyIHBhY2thZ2UgdGhhdCBj\nb3JyZXNwb25kcyB0byB0aGUgdmVyc2lvbiBudW1iZXIgcHJlZml4ZWQgd2l0\naCBhICd2Jy4gKGUuZy4gdjAuMS4wKQogICAgICA0KSBGb3JrIHRoZSBtYXNv\nbi1yZWdpc3RyeSBvbiBHaXRIdWIKICAgICAgNSkgQ3JlYXRlIGEgYnJhbmNo\nIG9mIHRoZSBtYXNvbi1yZWdpc3RyeSBhbmQgYWRkIHlvdXIgcHJvamVjdCdz\nIGBgTWFzb24udG9tbGBgIHVuZGVyIGBgQnJpY2tzLzxwcm9qZWN0X25hbWU+\nLzx2ZXJzaW9uPi50b21sYGAKICAgICAgNikgQWRkIGEgc291cmNlIGZpZWxk\nIHRvIHlvdXIgYGA8dmVyc2lvbj4udG9tbGBgIHBvaW50aW5nIHRvIHlvdXIg\ncHJvamVjdCdzIHJlcG9zaXRvcnkuCiAgICAgIDcpIE9wZW4gYSBQUiBpbiB0\naGUgbWFzb24tcmVnaXN0cnkgZm9yIHlvdXIgbmV3bHkgY3JlYXRlZCBicmFu\nY2ggY29udGFpbmluZyBqdXN0IHlvdXIgPHZlcnNpb24+LnRvbWwuCiAgICAg\nIDgpIFdhaXQgZm9yIG1hc29uLXJlZ2lzdHJ5IGdhdGVrZWVwZXJzIHRvIGFw\ncHJvdmUgdGhlIFBSLgoKT25jZSB5b3VyIHBhY2thZ2UgaXMgdXBsb2FkZWQs\nIG1haW50YWluIHRoZSBpbnRlZ3JpdHkgb2YgeW91ciBwYWNrYWdlLCBhbmQg\ncGxlYXNlIG5vdGlmeSB0aGUKY2hhcGVsIHRlYW0gaWYgeW91ciBwYWNrYWdl\nIHNob3VsZCBiZSB0YWtlbiBkb3duLgoKCk5hbWVzcGFjaW5nCj09PT09PT09\nPT09CgpBbGwgcGFja2FnZXMgd2lsbCBleGlzdCBpbiBhIHNpbmdsZSBjb21t\nb24gbmFtZXNwYWNlIHdpdGggYSBmaXJzdC1jb21lLCBmaXJzdC1zZXJ2ZWQg\ncG9saWN5LgpJdCBpcyBlYXNpZXIgdG8gZ28gdG8gc2VwYXJhdGUgbmFtZXNw\nYWNlcyB0aGFuIHRvIHJvbGwgdGhlbSBiYWNrLCBzbyB0aGlzIHBvc2l0aW9u\nIGFmZm9yZHMKZmxleGliaWxpdHkuCgoKClNlbWFudGljIFZlcnNpb25pbmcK\nPT09PT09PT09PT09PT09PT09PQoKVG8gYXNzaXN0IHZlcnNpb24gcmVzb2x1\ndGlvbiwgdGhlIG1hc29uIHJlZ2lzdHJ5IHdpbGwgZW5mb3JjZSB0aGUgZm9s\nbG93aW5nIGNvbnZlbnRpb25zOgoKVGhlIGZvcm1hdCBmb3IgYWxsIHZlcnNp\nb25zIHdpbGwgYmUgYS5iLmMuCiAgIE1ham9yIHZlcnNpb25zIGFyZSBkZW5v\ndGVkIGJ5IGEuCiAgIE1pbm9yIH
ZlcnNpb25zIGFyZSBkZW5vdGVkIGJ5IGIu\nCiAgIEJ1ZyBmaXhlcyBhcmUgZGVub3RlZCBieSBjLgoKLSBJZiB0aGUgbWFq\nb3IgdmVyc2lvbiBpcyAwLCBubyBmdXJ0aGVyIGNvbnZlbnRpb25zIHdpbGwg\nYmUgZW5mb3JjZWQuCgotIFRoZSBtYWpvciB2ZXJzaW9uIG11c3QgYmUgYWR2\nYW5jZWQgaWYgYW5kIG9ubHkgaWYgdGhlIHVwZGF0ZSBjYXVzZXMgYnJlYWtp\nbmcgQVBJIGNoYW5nZXMsCiAgc3VjaCBhcyB1cGRhdGVkIGRhdGEgc3RydWN0\ndXJlcyBvciByZW1vdmVkIG1ldGhvZHMgYW5kIHByb2NlZHVyZXMuIFRoZSBt\naW5vciBhbmQgYnVnIGZpeAogIHZlcnNpb25zIHdpbGwgYmUgemVyb2VkIG91\ndC4gKGV4LiAxLjEzLjEgLT4gMi4wLjApCgotIFRoZSBtaW5vciB2ZXJzaW9u\nIG11c3QgYmUgYWR2YW5jZWQgaWYgYW5kIG9ubHkgaWYgdGhlIHVwZGF0ZSBh\nZGRzIGZ1bmN0aW9uYWxpdHkgdG8gdGhlIEFQSQogIHdoaWxlIG1haW50YWlu\naW5nIGJhY2t3YXJkIGNvbXBhdGliaWxpdHkgd2l0aCB0aGUgY3VycmVudCBt\nYWpvciB2ZXJzaW9uLiBUaGUgYnVnIGZpeCAKICB2ZXJzaW9uIHdpbGwgYmUg\nemVyb2VkIG91dC4gKGV4LiAxLjEzLjEgLT4gMS4xNC4wKQoKLSBUaGUgYnVn\nIGZpeCBtdXN0IGJlIGFkdmFuY2VkIGZvciBhbnkgdXBkYXRlIGNvcnJlY3Rp\nbmcgZnVuY3Rpb25hbGl0eSB3aXRoaW4gYSBtaW5vciByZXZpc2lvbi4KICAo\nZXguIDEuMTMuMSAtPiAxLjEzLjIpCgo=\n'
result = base64.b64decode(a)
print(result.decode('utf-8'))

@ben-albrecht worked for me too!

The cache would not necessarily have to be in the GitHub repository

Even if it were a file on GitHub, we could fetch it via its raw URL (that would be fast), build an array or map of packages and authors, and then make the API calls as usual.
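A minimal sketch of that approach: fetch the cache file once via its raw URL (the cache.json path here is hypothetical) and index it by package name. The live fetch is commented out since it needs network access:

```python
import json
from urllib.request import urlopen

# Hypothetical location of a CI-generated cache file in the registry repo:
CACHE_URL = ("https://raw.githubusercontent.com/"
             "chapel-lang/mason-registry/master/cache.json")

def index_by_package(entries):
    """Build a package-name -> record map for quick lookups on the website."""
    return {entry["name"]: entry for entry in entries}

# Live fetch (requires network):
# with urlopen(CACHE_URL) as resp:
#     index = index_by_package(json.load(resp))

# Offline demonstration:
index = index_by_package([{"name": "Gnuplot", "author": "Marcos Cleison"}])
print(index["Gnuplot"]["author"])  # Marcos Cleison
```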
