On Thu, Nov 24, 2011 at 14:23, Andrew McNaughton <andrewmcnnz(a)gmail.com> wrote:
On 22/11/11 14:06, Marcus Furlong wrote:
On Thu, Nov 17, 2011 at 15:56, Peter Ross
<Peter.Ross(a)bogen.in-berlin.de> wrote:
Hi Marcus,
in principle it works; you can put all the stuff on one line and run it
between single quotes as an awk command line.
There is a little script doing this for you:
cat $my_awk_file | awk '{if (NR>1) for (i=1; i<=NF; i++) printf $i" "}'
In general it works, however, you have to escape the single quotes _inside_
the script.
"'" has to be written as "'"'"'"
Well, that syntax hurts ;-) especially if you want to write an awk line
that processes an awk script to get an awk line...
And then you get:
cat $my_awk_file | \
awk -F"'" '{for (i=1; i<NF; i++) printf $i"'"'"'""\"""'"'"'""\"""'"'"'"; print $NF}' | \
awk '{if (NR>1) for (i=1; i<=NF; i++) printf $i" "}'
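As a sanity check on that single-quote escape, python's shlex module (which tokenises the way a POSIX shell does) can show what the shell actually ends up passing to awk; the tiny awk program here is just a toy example:

```python
import shlex

# '"'"' closes the single-quoted string, adds a double-quoted ',
# then reopens the single-quoted string. A toy awk invocation:
cmd = """awk 'BEGIN { print "it'"'"'s" }'"""
tokens = shlex.split(cmd)
print(tokens[1])  # the awk program, with the literal ' restored
```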
The output that appears is a command line that does what the awk script does:
cat $my_file | awk '{ if ($1=="Repo-name") {printf "'"'"'"; for (i=3; i<NF; i++) printf $i" "; printf $NF"'"'"' "} if ($1=="Repo-baseurl") { url=1; comma=match($NF,","); if (comma) out=substr($NF,1,comma-1); else out=$NF; printf "'"'"'"out"'"'"' "; } else { if (url==1) { if ($1==":") { comma=match($NF,","); if (comma) out=substr($NF,1,comma-1); else out=$NF; printf "'"'"'"out"'"'"' "; } else {url=0; print "";} } } }'
Well, that's all in one line now - but who can read that?
Agreed, it's a bit unwieldy!
Feel free to use whatever you like;-)
Will continue to investigate options, thanks again for your help, at
least I have something that works now :)
Another option is to use perl. sed, awk and bash are all useful tools
for simple tasks, but as the complexity of what you are doing rises,
they get unwieldy fast. Perl tends to scale better.
You're at least bordering on where perl would be cleaner if treated as a
text processing exercise. Alternatively, it looks like there are some
CPAN modules that bind to the RPM API directly, so you would have a
data structure to navigate rather than a text processing exercise.
If what you have is working, then I don't expect it's worth re-doing it,
but if you're building further on this, it might be worth thinking about.
Yes we were thinking about this, but for the client program, we were
trying to keep external dependencies to a minimum. Similar projects
(like pakiti) also try to keep dependencies on the client to a
minimum, but they don't deal with uploading information about repos,
only packages.
There are also python modules for yum, rpm, deb and apt, and from a
cursory glance at /usr/share/yum-cli/yumcommands.py it would seem
relatively easy to get the information from the yumrepo data
structures in there. We can also be sure that if yum/apt are installed
then the relevant python libraries are already installed on the
client. Given that the rest of the project is python this might be one
way of doing it.
Another way I was thinking of doing it was to upload the full output
of "yum repolist" directly to the server, and parse it server-side
using python. This would be OK for CentOS hosts; for Debian hosts,
however, the output is harder to decipher.
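As a rough illustration of the server-side idea, here is a python sketch; the sample output is made up to match the usual three-column repolist layout, so treat both the sample and the column assumption as illustrative:

```python
import re

# Made-up sample of "yum repolist" output (the usual three-column layout);
# real output may start with a "Loaded plugins:" banner.
SAMPLE = """\
repo id          repo name                        status
base             CentOS-6 - Base                  6,367
updates          CentOS-6 - Updates               1,012
repolist: 7,379
"""

def parse_repolist(text):
    """Return (repo_id, repo_name, package_count) tuples from repolist output."""
    repos = []
    for line in text.splitlines():
        if line.startswith(("repo id", "repolist:", "Loaded plugins")):
            continue
        # columns are separated by runs of two or more spaces
        fields = re.split(r"\s{2,}", line.strip())
        if len(fields) == 3:
            repos.append((fields[0], fields[1], int(fields[2].replace(",", ""))))
    return repos

print(parse_repolist(SAMPLE))
```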
"apt-cache policy" on squeeze seems to give exactly what I want, but
on lenny, the repo URLs are incomplete. It seems I would need to parse
"apt-cache policy" and combine it with the output of "apt-cache dump |
grep -A10 '^File'". The nice thing about using apt-cache policy is
that it gives the repo priorities, along with package priorities if
there are any (i.e. pinned packages), and having this information
would be great.
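Pulling the priorities out of the "Package files:" section could look something like the sketch below; the sample fragment is made up in the squeeze-style layout (the exact layout varies between releases, which is the whole problem):

```python
import re

# Made-up fragment of "apt-cache policy" output (squeeze-style layout);
# on lenny the URLs in this section come out incomplete.
SAMPLE = """\
Package files:
 100 /var/lib/dpkg/status
     release a=now
 500 http://ftp.debian.org/debian/ squeeze/main i386 Packages
     release v=6.0.3,o=Debian,a=stable,n=squeeze,l=Debian,c=main
     origin ftp.debian.org
Pinned packages:
"""

def parse_policy(text):
    """Return (priority, source) pairs from the "Package files:" section."""
    sources = []
    for line in text.splitlines():
        # source lines start with a numeric priority; release/origin
        # continuation lines and section headers do not match
        m = re.match(r"\s*(\d+)\s+(\S.*)", line)
        if m:
            sources.append((int(m.group(1)), m.group(2)))
    return sources

print(parse_policy(SAMPLE))
```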
Currently we use "apt-get update --print-uris" but this loses
information about each repo that could be used to determine whether a
given repo is a mirror of another (e.g. release
v=6.0.3,o=Debian,a=stable,n=squeeze,l=Debian,c=main). It also (in
lenny/squeeze, fixed in wheezy) deletes the gpg files associated with
a repo if run as root. Grr.
Another alternative would be to upload the /etc/apt/*.list and
/etc/yum.repos.d/*.repo files directly and again perform server side
parsing.
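Server-side parsing of those files would be fairly painless: .repo files are INI-style, so python's stdlib configparser reads them directly, and a sources.list line splits on whitespace. A rough sketch with made-up file contents:

```python
import configparser
import io

# Hypothetical contents of an uploaded /etc/yum.repos.d/*.repo file;
# .repo files are INI-style, so configparser handles them as-is.
REPO_FILE = """\
[base]
name=CentOS-6 - Base
baseurl=http://mirror.centos.org/centos/6/os/x86_64/
enabled=1
gpgcheck=1
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(REPO_FILE))
repos = {s: dict(parser[s]) for s in parser.sections()}

def parse_sources_line(line):
    """Split one apt sources.list line into (type, url, suite, components)."""
    fields = line.split()
    return fields[0], fields[1], fields[2], fields[3:]

print(repos["base"]["baseurl"])
print(parse_sources_line("deb http://ftp.debian.org/debian squeeze main contrib"))
```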
Plenty of options, not enough time to try them all! :)
Marcus.
--
Marcus Furlong