Project infrastructure discussion

It has been a couple of weeks since the November online developer meeting: https://wiki.octave.org/Online_Developer_Meeting_(2021-11-23). Reviewing the items, one action was to discuss on Discourse ideas to ensure Octave project continuity.

The current project infrastructure is listed at https://wiki.octave.org/Project_Infrastructure and @jwe stated he would like to consolidate it.

This discussion is to agree on the scope of what needs to be consolidated, not on identifying specific solution providers yet.

For each of the following components, should it (A) be discontinued, (B) continue on the same service provider, or (C) continue on a different service provider?

  • Mercurial code repository (currently Savannah)
  • Bug, patch and task management (currently Savannah)
  • Mailing lists (currently Savannah)
  • Hosting source tarballs and downloads for end users (currently GNU.org)
  • Octave.org domain name registration (currently DreamHost)
  • octave.org main page (currently GNU.org?)
  • Octave wiki (currently DreamHost?)
  • Octave Doxygen (currently DreamHost?)
  • Octave buildbots (currently DigitalOcean?)
  • hg.octave.org (currently DigitalOcean? Is this different from octave:log?)
  • packages.octave.org (currently DreamHost?)
  • planet.octave.org blogs (currently DreamHost?)
  • Discussions (currently Discourse)
  • octave.space website (where is this located?)

Is the above list complete? Please fact-check the services and providers; I will edit any errors.

Based on yesterday’s (22 March 2022) Octave online meeting, I reached out to Fosshost.org as agreed. They said they respond within 72 working hours, so I will post their answer here when it arrives.

1 Like

Ayane from Fosshost got back to me with questions. I quote their reply here:

Thanks for applying to Fosshost’s services! Great project, but I do have some questions based on the infrastructure document cited in Project Infrastructure - Octave

● Do you have specific requirements on the VPS like CPU architecture and compute capacity requirements? Please let us know in specific detail how much you need.

● Would your buildbot, wiki, etc. require their own instances or would they be in one singular server?

● How many of the services would you consolidate to the VPS? Would it be everything or just a subset mentioned here?

We will provide you with the necessary allocation once we have an idea of your project’s requirements.

Let’s discuss here which services would make sense to move to Fosshost.

I will share this discussion link with Ayane as well.

Many thanks for the request :slightly_smiling_face:

I think a good start would be to move jwe’s dreamhost.com account, as this is mostly static files + PHP, plus MySQL for the MediaWiki.

Currently we use about 35 GB of data, which will grow by about 5 GB of Doxygen documentation with each major release (roughly once per year):

5.6G	backup                 # old wiki backups
 20K    bugs.octave.org
8.0K    hg.octave.org
 26G    octave.org             # manual + doxygen
 20K    packages.octave.org
 16M    planet.octave.org      # can be removed or kept here?
619M    wiki.octave.org        # our wiki

The server itself is currently a shared system with 32 GB RAM and an AMD Opteron™ 4122 processor (4 cores).

We probably do not need that much with an exclusive VPS; maybe 1-2 cores and 2-4 GB RAM. A Debian system with root access would be great :slightly_smiling_face:
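
For reference, the move itself could look roughly like the sketch below. This is only a rough outline under assumptions: the target host (new-vps.example.org), the database name/user (wikidb/wikiuser), and the paths are placeholders, not our real setup.

# 1. Dump the MediaWiki database on the DreamHost side.
mysqldump --single-transaction -u wikiuser -p wikidb > wikidb.sql

# 2. Copy the static files and the dump to the new VPS.
rsync -az ~/octave.org/ new-vps.example.org:/var/www/octave.org/
scp wikidb.sql new-vps.example.org:

# 3. Restore the database on the new VPS (then point LocalSettings.php at it).
mysql -u wikiuser -p wikidb < wikidb.sql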

Do we have to decide everything at once, or can we transition to Fosshost step by step, revisiting our demands as we go?

1 Like

Heya, Ayane from Fosshost here. We can grant you the necessary allocation if that’s all you need. You can read in our documentation what our x86 VPSes can offer.

2 Likes

Do you have a disk size in mind here? Give me a good estimate of how much disk space you would need for the intended appliance to deploy on the VPS :).

1 Like

This is not a final answer, but some input for the discussion.

Here is the disk usage of the Fedora buildbot (http://buildbot.octave.org:8010/#/waterfall):

[buildbotu@i7 ~]$ du -sm ./fc25-x86_64/*
1	./fc25-x86_64/buildbot.tac
1	./fc25-x86_64/buildbot.tac.new
1	./fc25-x86_64/buildbot.tac.old
1	./fc25-x86_64/buildbot.tac.orig
2221	./fc25-x86_64/clang-fedora
5625	./fc25-x86_64/gcc-fedora
1643	./fc25-x86_64/gcc-lto-fedora
2200	./fc25-x86_64/stable-clang-fedora
5573	./fc25-x86_64/stable-gcc-fedora
1	./fc25-x86_64/twistd.hostname
10	./fc25-x86_64/twistd.log
1	./fc25-x86_64/twistd.pid

(There are 5 workers; the host is an i7-2600 with 32 GB RAM.)
On top of that it has

du -sm  .ccache/
67070	.ccache/

which trades disk space for CPU cycles (it is optional; the cache size can be anywhere from 0 to ~100 GB).
Also, /tmp usage is quite high during builds, though I cannot tell precisely how much.
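
If disk space on a VPS is a concern, the cache is easy to cap. A quick sketch (the cap value is illustrative, matching the 0 to ~100 GB range above):

ccache --max-size=100G   # cap the cache; older releases use: ccache -M 100G
ccache -s                # show current cache size and hit statistics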

I think we would like to have at least 2 aarch64 workers (stable and dev branches with default configurations), plus developer access. Ideally I could see 8 workers (default, clang, opt, 32-bit; all for both stable and default branches).
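
For scale: once a VM exists, adding a worker is cheap. A minimal sketch, assuming the master accepts workers on the customary port 9989; the worker name and PASSWORD here are placeholders:

pip install buildbot-worker
buildbot-worker create-worker ~/stable-gcc-aarch64 buildbot.octave.org:9989 stable-gcc-aarch64 PASSWORD
buildbot-worker start ~/stable-gcc-aarch64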

@sr229 Thanks for the info. Regarding the webspace portion of Octave hosting (website, wiki, documentation, Doxygen), I think that 1-2 cores, 2-4 GB RAM, and 100 GB storage will be enough for the next 5 years (about 35 GB today plus ~5 GB per yearly release leaves comfortable headroom).

@dasergatskov For the buildbot portion of Octave hosting, I also think we need at least 10 workers for @jwe’s and my buildbots. Different architectures would be great. Each buildbot worker should have at least 2 cores, 4 GB RAM, and 50 GB storage.

1 Like

I was talking specifically about aarch64. I assume most of the others are on x86_64.

1 Like

Heya, thanks for the details. I’ve relayed this to the team; we’ll keep you posted.

2 Likes

Heya @siko1056, just want to let you know after discussing with our infrastructure team here’s what we’d like to suggest:

  • we’ll create a large x86 VM (0.5 TB disk, 8 vCPUs, 8 GB RAM)
  • we’ll create a large budget (5) within our AARCH IaaS platform

You can use the AARCH budget to create a single large AARCH VM or several smaller ones.

Please confirm if this will help your project move forward.

2 Likes

Thanks for the amazing offer @sr229 !

Would it be possible to get a separate, smaller x86 machine for the website-related things? I might be old-fashioned, but I do not like the idea that sensitive webhosting and “crash test Buildbots” run on the very same system :innocent:

Regarding the AARCH machines, I think we can start with one v1.large instance, mainly because of the SSD storage.

Again, many thanks for your help @sr229 and Fosshost :slightly_smiling_face:

1 Like

Yes, but the team deemed Octave’s capacity needs great enough that we decided to give you a single big VM for the webserver. There you can even consolidate your package repos.

Awesome! I’ll let the team know. Once we have finalized everything, we will let you know about your access via email.

Once again, all the best!

2 Likes

Hi. In preparation for next week’s Octave developer meeting, I’m touching base on the status of this activity. Did one of the Octave devs (@siko1056?) receive this access email?

Not yet; I am still looking forward to the great offer by @sr229 and Fosshost :slightly_smiling_face:

I have allocated and deployed resources for you. @arungiridhar has the information on the ticket.

2 Likes

Thank you so much to @peer and Fosshost for their generous offer!

@siko1056: the temporary account password was sent to my email. I will reset the password and send you the details to set it up.

1 Like

I have done the initial setup and shared the details with @siko1056.

@siko1056: please review Code of Conduct - Fosshost Docs. Could you add the Fosshost logo to the GitHub pages as specified, please?

1 Like

Glad I could help. I added some extra resources for static data, mirrors, etc.

If you do use mirrors, let us know; we have a mirrors product you might be interested in.

2 Likes

After resolving a few minor hiccups (thanks @peer!), the VPS is now up and running as fosshost.octave.org. It is running a webserver and sshd but has no content yet, and it won’t be world-readable until we’re ready. Hopefully by the time of the next dev meeting!
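
For anyone who wants to poke at it from outside, a few read-only probes (USER is a placeholder; expect the web root to be gated for now):

dig +short fosshost.octave.org        # DNS resolves to the new VM
curl -sI http://fosshost.octave.org/  # webserver answers, even without content
ssh -o BatchMode=yes USER@fosshost.octave.org true   # sshd reachable (key auth)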

1 Like