how are things going here
what do we need to do to provide better support for #C0H28NMAS
@meow: howdy! What we need most atm is PRs for issues. we've been given plenty of money, but don't have time to fix everything that needs fixing
really
hrm
tell me more
so I can get on that and find some help
we have a list of issues "ready for work" at https://github.com/clojars/clojars-web/labels/ready
there are probably a few more that have come in recently that should be tagged with :ready as well
ok, I will review them to see if I can think of how to get volunteers to help with them
great, thanks!
sorry for the quick responses, I'm just swamped with work stuff atm.
I understand
@meow: i want to work on a tool that can detect and fix broken maven metadata xml files
this is needed before atomic commits can be implemented
ok
so what do we need - some kind of XML parser that can handle malformed XML or something
something that validates maven metadata XML, or something more generic?
it has to do with what happens on partial commits
like when you push a release to the repo there are multiple artifacts that need to be uploaded
foo-1.2.3.pom, foo-1.2.3.pom.asc, foo-1.2.3.jar, etc
and it's possible that something fails before uploading all of them
there is a metadata file that describes all the versions, which can get out of sync with the actual files in the repo
so currently you can download jars from repos that don't have those versions listed in the metadata.xml file
because the jars made it into the repo but the metadata didn't
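a detector for that drift could be pretty small. here's a rough sketch (Python rather than anything in the actual repo, and the sample XML is made up, not real Clojars data) that compares the versions declared in maven-metadata.xml against the version directories actually on disk:

```python
import xml.etree.ElementTree as ET

# Hypothetical example: the metadata lists two versions while three
# versions actually made it into the repo.
METADATA = """<metadata>
  <groupId>foo</groupId>
  <artifactId>foo</artifactId>
  <versioning>
    <versions>
      <version>1.2.2</version>
      <version>1.2.3</version>
    </versions>
  </versioning>
</metadata>"""

def listed_versions(metadata_xml):
    """Versions declared in maven-metadata.xml."""
    root = ET.fromstring(metadata_xml)
    return {v.text for v in root.findall("./versioning/versions/version")}

def unlisted_versions(metadata_xml, versions_on_disk):
    """Versions present in the repo but missing from the metadata."""
    return set(versions_on_disk) - listed_versions(metadata_xml)

print(unlisted_versions(METADATA, ["1.2.2", "1.2.3", "1.2.4"]))
# -> {'1.2.4'}
```

ET.fromstring will just blow up on truly malformed XML, so the real tool would also need a "can't even parse it" path, but detecting the out-of-sync case is basically a set difference.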
the idea for atomic deploys, as i understand it, is to use the metadata.xml file to commit a transaction
so the release won't be live until all artifacts have been uploaded and verified according to metadata.xml
atomic commits are the goal then
metadata.xml is needed to commit an atomic transaction
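as a sketch of that transaction idea: the release only "commits" once every artifact the metadata implies is actually present. the suffix list and names here are illustrative, not the exact Clojars layout:

```python
# Sketch of "don't go live until everything is uploaded and verified".
# The set of expected suffixes is an assumption for illustration.
EXPECTED_SUFFIXES = (".pom", ".pom.asc", ".jar", ".jar.asc")

def expected_artifacts(artifact_id, version):
    """All file names a complete release of this version should include."""
    base = f"{artifact_id}-{version}"
    return {base + suffix for suffix in EXPECTED_SUFFIXES}

def release_complete(artifact_id, version, uploaded):
    """Commit the release only if nothing the metadata implies is missing."""
    return expected_artifacts(artifact_id, version) <= set(uploaded)

print(release_complete("foo", "1.2.3",
                       ["foo-1.2.3.pom", "foo-1.2.3.jar"]))
# -> False: the .asc signatures never made it, so the release stays invisible
```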
clarify for me: isn't a metadata file referencing versions that are out of sync a problem independent of atomic commits?
and a missing metadata.xml file is the ultimate in borken
@micha: how would you like to proceed? what's the first step? what can I do to help?
Hey all
hey
@meow: correct - bad metadata is its own problem, but needs to be fixed before atomic deploys can be implemented
gotcha
@jonahbenton: any suggestions?
sounds like @micha has the right of it. for problems like this i like approaches like https://github.com/clojars/clojars-web/issues/226#issuecomment-142270596. having the same tool be able to handle an atomic move from a dmz-like upload area to the canonical coordinates, as well as be able to validate/clean/fix artifacts within the canonical tree, probably makes sense. the plan is to continue to keep the repo on a file system?
@jonahbenton: the plan is to move the repo off to a cloud file store in the near future
but that could be the last step in the verify pipeline
@tcrawley: cool. so a tool that verified a group of file system resources for correctness and also delivered an atomic move of said resources to cloud could be utilized to populate the cloud file store with verified artifacts from the current repo, and also handle new artifact uploads going forward?
@jonahbenton: everything in the current repo will end up in the cloud store, correct or not, though we will run a tool across the repo to fix what we can beforehand
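for the filesystem case, the dmz-style upload area @jonahbenton described could look roughly like this (a sketch, not what Clojars actually does - and note it relies on rename being atomic on a single POSIX filesystem, so a cloud store would need a different commit primitive, e.g. writing metadata.xml last):

```python
import os
import tempfile

def atomic_publish(staging_dir, canonical_dir):
    """Move a fully-verified staging directory into the canonical tree.

    os.rename is atomic on POSIX when source and destination are on the
    same filesystem, so the release appears all at once or not at all.
    canonical_dir must not already exist.
    """
    os.rename(staging_dir, canonical_dir)

# Hypothetical layout: artifacts are uploaded and verified in a
# staging area, then renamed into the canonical coordinates.
root = tempfile.mkdtemp()
staging = os.path.join(root, "staging", "foo", "1.2.3")
os.makedirs(staging)
with open(os.path.join(staging, "foo-1.2.3.jar"), "w") as f:
    f.write("jar bytes")

canonical = os.path.join(root, "repo", "foo", "1.2.3")
os.makedirs(os.path.dirname(canonical))
atomic_publish(staging, canonical)
print(os.path.exists(os.path.join(canonical, "foo-1.2.3.jar")))  # -> True
```

the same tool could walk the existing repo through that verify-then-move pipeline to populate the cloud store with clean artifacts, which is the unification @jonahbenton was suggesting.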