Updated API

Sorry to be repetitive, but is there a way to get a single .zip or .tar.gz with the API docs? Or does “genpycode -m” still generate them? I’m asking because at present I don’t have 1.1.

I just got the generator working yesterday, and it’s not really finished. Give it another two or three days.

Meanwhile, we need to find a webspider program that can extract the manual from the wiki. Does anybody know of one?

For Mozilla-based browsers I’d recommend the SpiderZilla extension (spiderzilla.mozdev.org/). It’s basically an httrack frontend (httrack is similar to wget).
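If you’d rather call httrack directly instead of going through the extension, something along these lines should work (the filter pattern is just my guess at what keeps the mirror inside the manual):

    httrack "http://panda3d.org/manual/index.php/" -O ./panda3d-manual "+*panda3d.org/manual/*"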

I’ve written a short program for parsing the manual downloaded with HTTrack.
Have a look at https://discourse.panda3d.org/viewtopic.php?t=554; there’s a .chm of my work there.

Martin

When marc mentioned “wget,” I slapped my forehead. Why didn’t I think of that? Duh. I just made the website a lot more wget-friendly. In particular, I concentrated on the wiki. Try this:

wget -r -k -Iwiki/index.php,wiki/images,nimages,stylesheets panda3d.org/manual/index.php/

I haven’t tested it very carefully, but it seems to download a passable copy of the manual to one’s hard disk. The API reference manual is another matter, but I’ll have a solution for that soon enough.
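For anyone wondering about the flags: -r recurses, -k converts the links so they work locally, and -I limits the crawl to the listed directories. The same command with long options:

    wget --recursive --convert-links \
         --include-directories=wiki/index.php,wiki/images,nimages,stylesheets \
         panda3d.org/manual/index.php/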

So how do you install software on your server without wget, Josh? You’ll have to explain that to me one day… :wink:

For anyone not equipped with wget or a Mozilla-based browser, I’m really sorry! :stuck_out_tongue: Just kidding; post and ask if you want to download the manual and don’t know how.

Josh, how about a daily cron job that creates something like “panda3dmanual-snapshot-datexyz.tar.gz”? That would be much easier for potential offline users without webspider software.
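A minimal sketch of what that cron job could look like (the paths are invented, and note that % has to be escaped inside a crontab):

    # daily at 04:00, tar up the wiki directory
    0 4 * * * tar czf /var/www/downloads/panda3dmanual-snapshot-$(date +\%Y\%m\%d).tar.gz -C /var/www wiki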

We could also think about setting up a mirror once panda3d.de is further along. I think it’s advisable, at least for all the important docs and the CVS repository. That last downtime still gives me a headache; it would be terrible to lose anything to some accident one day…

When I say I made it more “wget-friendly,” I mean that the PHP code that runs the website now detects wget and, when it does, strips off the “fluff” like the screenshots and the JavaScript menu. You end up with a nice, tidy-looking manual.
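Assuming the detection simply keys on the User-Agent header (that part is my guess), you can compare the two versions by faking the agent string with curl:

    curl -A "Wget/1.10" http://panda3d.org/manual/index.php/ | head      # stripped-down version
    curl -A "Mozilla/5.0" http://panda3d.org/manual/index.php/ | head    # full version, menu and all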

As for a daily wiki snapshot — absolutely, but remember, I’m just one guy.

Oh - we do off-site backups of the website. Though, to be fair, I’ve never tested those backups.

We should probably talk about that in more detail by e-mail… I’ll write you as soon as I can.

(EDIT) …not now, because I’m about to go to bed. :slight_smile:

Marc: Short reply: Ever heard of w3m or Lynx or … ? :wink:
Uh, SSHing a tarball to the destination is also a possibility :wink:
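For the record, the tarball-over-SSH trick is a one-liner (host and paths made up):

    tar czf - manual/ | ssh user@panda3d.org "tar xzf - -C /var/www/"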

Or just write the code yourself! :open_mouth:
:smiley:

Regards, Bigfoot29

Bigfoot: I prefer SQL dumps… :slight_smile:
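For the wiki that would be something like this (names and paths invented, assuming MediaWiki on MySQL; run it by hand or with credentials in ~/.my.cnf so cron doesn’t hang on the password prompt):

    mysqldump -u wikiuser -p wikidb | gzip > wikidb-$(date +%F).sql.gz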