Puppy Package Site - Planning Stages

Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#31 Post by Q5sys »

01micko wrote:However, interface is one thing; the underlying engine is the key, and dependency checking is the bugbear IMO. This is more the packager's responsibility the way petget is structured at the moment. Maybe dir2pet needs to be more rigorous in checking dependencies, with all of them listed in the dependency field.

And what about sfs management? Should it be part of PPM or separate?
While I do think SFSs should be in the PPM... at the same time I don't. Pets and SFSs are different beasts entirely. Not to complicate things... but I think perhaps SFSs should be in a separate system. Not only is their use different, but their purpose was to help make Puppy somewhat modular. Of course, now that we have SFS load-on-the-fly, the line between pets and SFSs is becoming blurred.

I agree that the packager should list dependencies. I guess we need to come up with some agreed convention on how that should be accomplished. It shouldn't be too hard to have an additional script in dir2pet, or whatever else, run ldd and then sed the output into a text file (perhaps a .pd, for pet dependency) - something like the sketch below.
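Rough sketch only - the .pd name is just my suggestion from above, and the paths would need adjusting for however dir2pet lays the package tree out:

#!/bin/sh
# rough sketch: list the shared libraries every ELF file in a
# package tree links against, and save them as a .pd file
# usage: pd-gen /path/to/packagedir
PKGDIR="$1"
PDFILE="${PKGDIR##*/}.pd"
find "$PKGDIR" -type f | while read F; do
  file "$F" | grep -q 'ELF' || continue
  ldd "$F" 2>/dev/null | sed -n 's/^[[:space:]]*\(.*\) => .*/\1/p'
done | sort -u > "$PDFILE"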
Again... the social problem of getting everyone on board.

jemimah
Posts: 4307
Joined: Wed 26 Aug 2009, 19:56
Location: Tampa, FL
Contact:

#32 Post by jemimah »

Dependencies wouldn't be much of a problem if we had a central repo and a proper package maintenance team. All we actually need is someone to test whether or not a given package installs properly and fix it if it doesn't. A complex packaging database scheme is not needed.

It's the proliferation of puppy flavors that largely prevents the devs from teaming up on repo maintenance, leaving each dev with the impossible job of maintaining a repo by him or herself.

What can we do to encourage teamwork rather than forking? The read-only architecture of puppy, the flexibility of woof, the desirability of packages that are custom-compiled, and the ease of remastering all make forking easy and working together difficult.

The only person who commands enough respect around here to unify puppy development is Barry - and that's definitely not his style.

scsijon
Posts: 1596
Joined: Thu 24 May 2007, 03:59
Location: the australian mallee
Contact:

#33 Post by scsijon »

jemimah wrote:Dependencies wouldn't be much of a problem if we had a central repo and a proper package maintenance team. All we actually need is someone to test whether or not a given package installs properly and fix it if it doesn't. A complex packaging database scheme is not needed.

It's the proliferation of puppy flavors that largely prevents the devs from teaming up on repo maintenance, leaving each dev with the impossible job of maintaining a repo by him or herself.

What can we do to encourage teamwork rather than forking? The read-only architecture of puppy, the flexibility of woof, the desirability of packages that are custom-compiled, and the ease of remastering all make forking easy and working together difficult.

The only person who commands enough respect around here to unify puppy development is Barry - and that's definitely not his style.
Maybe what we need is to go back to the basic concept of Puppy and change it slightly (as Barry did in 2.12, as you are doing with Saluki, and as Stu and others, myself included, have done from time to time with various builds) and separate the Base or Core component of Puppy from the User Applications part, but now on a permanent basis.

It would mean that those interested in the base/core part - as BarryK is with ARM at the moment - could concentrate on that component of Puppy, knowing that others are keeping the User side in hand and up to a reasonable standard.

Those that wanted either a general or a specific user-set version could concentrate on that component, and others that wanted Puppy for things like firewalls, various server types, etc. could customise a User component, while knowing the Base/Core Puppy was being looked after by those that knew that part of it.

I 'spoke' some time ago, in another topic, of having multiple .sfs files matching the directory structure autoload if they existed. With all the work that has happened, and is happening, on SFS access - and with the number of loadable SFSs increased and working properly - I wonder if it's time to consider this again. The sketch below shows roughly what I mean.
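A sketch only - it assumes the SFSs sit in /mnt/home and the union is aufs; the real version would belong in Puppy's init script, and the remount syntax here is from the aufs docs, so treat it as illustrative:

#!/bin/sh
# sketch: loop-mount every SFS found and append it to the union
for SFS in /mnt/home/*.sfs; do
  [ -e "$SFS" ] || continue
  NAME=$(basename "$SFS" .sfs)
  mkdir -p "/initrd/mnt/$NAME"
  mount -o loop,ro "$SFS" "/initrd/mnt/$NAME"
  # append the new branch to the aufs union at / (illustrative syntax)
  mount -o "remount,append:/initrd/mnt/$NAME=ro" /
done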

It would mean that someone - and we have quite a few music/multimedia members - could form a Team to work on this specific topic, with the output being a broad SFS for general release PLUS specific SFSs and pets to cover specific function sets, such as what a music studio or a band-on-the-road would need. They would be expected, as part of their function, to test, update and handle problems with their group of apps, and it could be that for a particular application the message was that it wasn't "certified" for a particular Base other than version nnnnnn-xx.xx.x-ippp.pet! Each package would need an entry in the Additional Software sections, and all versions would be in the ONE entry!

The same could be done with the Business and Games worlds. Graphics and documentation are others that come to mind that would improve greatly under this structure. Internet may be a problem, but that just means the Team would have to be a bit more 'robust' in structure.
Things like Networking, Utility, Desktop, System, Setup and Filesystem would, I think, maybe at least in part, need to stay with the Base/Core Teams. Or maybe these need a single Specialist Team instead, made up of a rep from each 'certified' Puppy in the current dataset!

However, on re-reading I'm not sure whether I've gone too far off-topic in all this, but I believe the basics of it need to be considered, at least in general. Otherwise the storage site will end up having to be totally reworked every six months or every year or two, and that would lose support in the userbase: you would get 'it's too hard to do anything with the site as it's always changing' messages appearing everywhere!

Anyway, another five Aussie cents' worth.

regards to all
scsijon
Afterthought: maybe it will need to 'pseudo-mirror' the appropriate forum sections to work properly! OUCH!

ecube
Posts: 88
Joined: Fri 11 Jul 2008, 17:00
Location: Västerås, Sweden

Repo maintainer offer

#34 Post by ecube »

jemimah wrote: I think to start, we really need to figure out who is interested in being a repo maintainer
Jemimah,

I have some spare time and I am willing to help.

Available hardware consists of six PCs of varying age and three printers.

How do I proceed?
:D
ecube

jemimah
Posts: 4307
Joined: Wed 26 Aug 2009, 19:56
Location: Tampa, FL
Contact:

#35 Post by jemimah »

scsijon wrote: Those that wanted either a general or a specific user-set version could concentrate on that component, and others that wanted Puppy for things like firewalls, various server types, etc. could customise a User component, while knowing the Base/Core Puppy was being looked after by those that knew that part of it.
It's pretty difficult to separate the core from the user component, as I am finding out. Puppy is very tightly integrated and implemented in a way that makes a lot of assumptions. Getting Xfce working smoothly has required dozens of patches to core Puppy scripts.

Saluki attempts to separate the base and the applications. It has one SFS - the adrive - that holds all the apps and autoloads if it exists. The adrive can be rebuilt quickly and easily, without Woof. This is my attempt to make collaboration more possible.

Currently, there is no facility to tell the ppm what dependencies an installed SFS provides. This will have to be added if the adrive idea is going to work correctly with the ppm.
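Roughly what such a facility might look like, as a sketch only (the manifest directory and the .lst naming are invented here): each SFS ships a list of the libraries it provides, and the dep-checker consults those lists before flagging anything as missing.

#!/bin/sh
# sketch: record which shared libraries an SFS provides, so a
# dependency check can consult the manifest instead of flagging
# those libs as missing ('/var/local/sfs_provides' is invented)
SFS="$1"
MP=/mnt/sfscheck
mkdir -p "$MP" /var/local/sfs_provides
mount -o loop,ro "$SFS" "$MP"
find "$MP" -type f -name '*.so*' | sed "s|^$MP||" \
  > "/var/local/sfs_provides/$(basename "$SFS" .sfs).lst"
umount "$MP"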

I don't think it should be split into more SFSs than the one adrive, because a lot of the apps share libraries. For example, I can use GStreamer for the browser, all the multimedia apps, the CD burner, the chat client, the presentation viewer, and maybe other stuff. However, someone else may want to build around MPlayer and FFmpeg, or Xine, so they can just dump the whole adrive and make their own without the big libs they don't want.

It'll take some time to see if the concept catches on.

jemimah
Posts: 4307
Joined: Wed 26 Aug 2009, 19:56
Location: Tampa, FL
Contact:

Re: Repo maintainer offer

#36 Post by jemimah »

ecube wrote:
jemimah wrote: I think to start, we really need to figure out who is interested in being a repo maintainer
Jemimah,

I have some spare time and I am willing to help.

Available hardware consists of six PCs of varying age and three printers.

How do I proceed?
:D
ecube
I can give you upload access to the Saluki repo. However, it's best if you wait until I get the Saluki base nailed down first, because some changes I make require a lot of things to be repackaged and I don't want anyone to have to redo their work. I'll PM you the password in a couple of weeks when it's ready. In the meantime feel free to provide feedback about Saluki - I consider feedback from contributors the most useful.

Aitch
Posts: 6518
Joined: Wed 04 Apr 2007, 15:57
Location: Chatham, Kent, UK

#37 Post by Aitch »

Currently, there is no facility to tell the ppm what dependencies an installed SFS provides
Jemimah

I posted a couple of suggestions in the Saluki thread - I don't know if they will fit with your adrive concept?

Good move though... :D

Aitch :)

Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#38 Post by Q5sys »

jemimah wrote:It's pretty difficult to separate the core from the user component, as I am finding out. Puppy is very tightly integrated and implemented in a way that makes a lot of assumptions. Getting Xfce working smoothly has required dozens of patches to core Puppy scripts.

Saluki attempts to separate the base and the applications. It has one SFS - the adrive - that holds all the apps and autoloads if it exists. The adrive can be rebuilt quickly and easily, without Woof. This is my attempt to make collaboration more possible.

It'll take some time to see if the concept catches on.
Amen to that one. I've been beating my head against a wall for quite a while now trying to get GNOME 3 to run without any problems. It's humbling when you start digging in and realize how much you don't know about Puppy. lol
When you are done with your Xfce voyage, will you be posting the road map you used to get things to work, and which core scripts you had to tweak and how (just a simple description)? I'm sure that would be immensely helpful to a bunch of people.

Back on the package manager though: I was thinking more along the lines of something just for add-on programs at this point. Sure, later down the road it'd be nice to have everything, but I think if we try to do too much at once the idea will crash and burn. If we start small, it may work.
I do not, however, want to distract anyone from their own work and what they are trying to develop just to work on this idea. We've been going for years without it; still, I do hope we can get something up and running before 2020. lol

jemimah
Posts: 4307
Joined: Wed 26 Aug 2009, 19:56
Location: Tampa, FL
Contact:

#39 Post by jemimah »

I've got a tarball of woof patches that I will post once I'm done tweaking them.

jemimah
Posts: 4307
Joined: Wed 26 Aug 2009, 19:56
Location: Tampa, FL
Contact:

#40 Post by jemimah »

So I've been running the Saluki repo with a few maintainers for a while now, and I've been thinking about what would make life easier on the server side (getting back to the original topic of this thread).

What would be awesome is if the server could maintain the Packages-*-official database. Ideally the repo maintainers could upload or delete pets, and some process on the server would notice the changes and update the DB.
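Something like this cron-driven sketch might do it (the DB name and the entry format are placeholders here; real Packages-* entries carry more pipe-delimited fields than shown):

#!/bin/sh
# sketch: rebuild the package DB whenever the repo contents change
REPO=/var/www/repo
DB="$REPO/Packages-puppy-saluki-official"   # name is illustrative
NEWSUM=$(ls -l "$REPO"/*.pet | md5sum)
OLDSUM=$(cat "$REPO/.lastsum" 2>/dev/null)
[ "$NEWSUM" = "$OLDSUM" ] && exit 0
: > "$DB.tmp"
for PET in "$REPO"/*.pet; do
  BASE=$(basename "$PET" .pet)
  # real entries have more fields (version, deps, category, size, ...)
  echo "$BASE|$BASE||||||$(basename "$PET")||" >> "$DB.tmp"
done
mv "$DB.tmp" "$DB"
echo "$NEWSUM" > "$REPO/.lastsum"

Run from cron every few minutes, that would keep the DB in step with whatever the maintainers upload or delete.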

The second thing would be a way to have multiple accounts access the same repo. Each maintainer would have their own account, so they could be cut off easily if the account were compromised. All maintainers could upload to the repo, but I think only one admin account should be able to delete or overwrite other people's uploads.
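On a plain Unix host the permissions part is basically the /tmp trick: a group-writable directory with the sticky bit set, and one system account per maintainer (a sketch; the usernames are invented):

# one group for all maintainers, one system account each
groupadd repo
useradd -m -G repo alice      # hypothetical maintainer accounts
useradd -m -G repo bob
# group-writable repo dir; the sticky bit (the leading 1) means only
# a file's owner - or root/the admin - can delete or rename it
mkdir -p /var/www/repo
chgrp repo /var/www/repo
chmod 1775 /var/www/repo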

smokey01
Posts: 2813
Joined: Sat 30 Dec 2006, 23:15
Location: South Australia :-(
Contact:

#41 Post by smokey01 »

Jemimah, if you provide the details in a PM I can set it up for you.

Names etc.

ocpaul20
Posts: 260
Joined: Thu 31 Jan 2008, 08:00
Location: PRC

#42 Post by ocpaul20 »

Did anything happen to this?

I saw somewhere that someone suggested a torrent method of downloading packages.

We could easily create a torrent-cache kind of site where we keep pointers to where the packages are kept, plus a small (or large) XML file with details of the package: who maintains it, where the pet, sfs or iso is located, and such like.

That way the XML files could be ordered into distros, packages, etc., and the packages themselves could be spread out all over the internet. They could be on individuals' hosted servers if necessary.

I think the download load needs to be spread out so that no one site gets hit with all the users requesting packages.

If this idea were adopted, then we would just need to decide what information is required in the XML file and write a fairly simple application to sort it into different orders - something like the sample entry below.
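For illustration only - all the field names here are invented, just to show the kind of information the format would need to carry:

<package>
  <name>geany</name>
  <version>1.24</version>
  <type>pet</type>                    <!-- pet | sfs | iso -->
  <maintainer>somebody@example.org</maintainer>
  <url>http://host.example.org/pets/geany-1.24.pet</url>
  <md5>d41d8cd98f00b204e9800998ecf8427e</md5>
  <distro>slacko-5.7</distro>
</package>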

Each night a script could go through and check the availability of each server, and each time a package was requested but not available, the link could turn red or yellow, for example, so that others knew it was down.
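The nightly check itself could be tiny - a sketch, where urls.txt is a hypothetical list of package URLs and the OK/DOWN words are just what the site would colour the links by:

#!/bin/sh
# sketch: probe every package URL and record its status for the site
: > status.txt
while read -r URL; do
  if wget -q --spider "$URL"; then
    echo "$URL OK"   >> status.txt
  else
    echo "$URL DOWN" >> status.txt
  fi
done < urls.txt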

I think torrents are the way forward, but there is no easy, small application which runs in the background and notifies the user when the package is ready for installation.

PPM would need to start automatically once the download was complete and the md5 had been checked, and there would need to be a check on shutdown to warn that a download was still in progress, or to pause the torrent until the next boot.
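The hand-off after a finished download could be as simple as this sketch (it assumes a .md5 file sits alongside the download; petget is Puppy's pet installer):

#!/bin/sh
# sketch: verify a finished download, then hand it to the installer
PKG="$1"                      # e.g. /root/Downloads/foo-1.0.pet
cd "$(dirname "$PKG")" || exit 1
if md5sum -c "$(basename "$PKG").md5"; then
  exec petget "$PKG"          # install via the package manager
else
  echo "md5 check failed - not installing $PKG" >&2
fi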
==================
Running DebianDog Jessie Frugal with /live and maybe with changes or savefile or... who knows?

Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#43 Post by Q5sys »

ocpaul20 wrote:Did anything happen to this?
No one was really very interested in working on it.
ocpaul20 wrote:I think torrents are the way forward, but there is no easy, small application which runs in the background and notifies the user when the package is ready for installation.
Torrents are only good in situations where you have a lot of people who want the same file and are willing to share it for a long period of time.

Within a few months most torrents will die and we'd be left with one server hosting everything, which is no different than just hosting it there anyway.

The number of people that are going to be actively downloading random_package.pet at the same time is going to be very, very low, so there is no benefit to torrents. I have a friend who runs a 100Gbit+ CDN and will give me an extremely low rate, but I need help with the site design to make it viable.

I'm not a web developer, so I need help from someone who is to help build up the site. Hosting the packages itself is the easy part.

ocpaul20
Posts: 260
Joined: Thu 31 Jan 2008, 08:00
Location: PRC

#44 Post by ocpaul20 »

As I understand it, the beauty of peer-2-peer is that
a) you can stop downloads and pick them up later - since the downloads come in many "containers" of information.

b) the downloaded "containers" are shared amongst the peers while the peers are leeching, which spreads the load (since the "containers" don't HAVE to come from the original download source seeder)

c) not sure how trackers come into it yet.

========================
OK, so if P2P is not the answer, then maybe the small XML files could be the component which allows people to search for and select the package they want.

We would have to define a format which would provide us with fields for all the information we needed.

In my mind, the difficulty would be getting the package uploader to fill in the required fields so that, as a whole, it made sense when combined with all the other entries which other people had made for their uploaded packages.
==================
Running DebianDog Jessie Frugal with /live and maybe with changes or savefile or... who knows?

Q5sys
Posts: 1105
Joined: Thu 11 Dec 2008, 19:49
Contact:

#45 Post by Q5sys »

ocpaul20 wrote:As I understand it, the beauty of peer-2-peer is that
a) you can stop downloads and pick them up later - since the downloads come in many "containers" of information.
This is true, and it would be helpful for distributing ISOs, but it really doesn't make sense for distributing 10MB packages. How often do you really need to pause and resume a 10MB download?
ocpaul20 wrote:b) the downloaded "containers" are shared amongst the peers while the peers are leeching, which spreads the load (since the "containers" don't HAVE to come from the original download source seeder)
Again, I doubt you're going to have many people all downloading the same package at the same time... so the load isn't going to be spread around. And most people aren't going to continue to 'seed' a package once they get it and install it. Remember, Puppy for the most part is focused on people with minimal hardware; they aren't going to want a torrent client running on their machine all day long.
ocpaul20 wrote:c) not sure how trackers come into it yet.
Well, we'd most likely have to run our own tracker. It's not hard, I just don't think it's the best option.

Also keep in mind that some ISPs like to throttle torrent traffic, so some people will get slower rates than they would just downloading directly over HTTP(S) or FTP.
ocpaul20 wrote:OK, so if P2P is not the answer, then maybe the small XML files could be the component which allows people to search for and select the package they want.

We would have to define a format which would provide us with fields for all the information we needed.

In my mind, the difficulty would be getting the package uploader to fill in the required fields so that, as a whole, it made sense when combined with all the other entries which other people had made for their uploaded packages.
Client side:

I think the way TazOC handled this in LightHousePup is probably a good way to go, because it leaves the ability for people to come along later and tweak it more easily for their own use if they want. As much as I like 'standards', I also like making things so people can go their own way.

Server side:

You simply have a form that someone needs to fill out for their package to be made public and visible to everyone. If they can't be bothered to fill out the form... then I guess they don't really want people to have it.


What I think makes the most sense:

I have long thought that the way Arch Linux handles the information for AUR packages (http://aur.archlinux.org) is the simplest and best way. Also, since it's all web-based, there is no need for a 'package search' on the client side: users have a web browser, so why write something custom to run on the user's system? This also means the system can be used across various Puppy bases.
Using the AUR site as an example, all we would need to do is add a field for release version (Slacko 5.7, Tahr, etc.) so people only search for packages that will work on their system.

This way everything is in one central place: everyone can share their packages in one spot, and bugs, feedback, etc. can all live in the same place instead of being scattered across different sites as they are now (forum, wiki, random site where a package is stored).

All of this is stuff I've wanted to set up before, but of the people I've talked to... no one else really seems keen on using it. I'm not going to put time and money into building something only for everyone in the community to ignore it.

ally
Posts: 1957
Joined: Sat 19 May 2012, 19:29
Location: lincoln, uk
Contact:

#46 Post by ally »

The files I upload to archive.org can be downloaded as torrents.

:)
