We are planning to switch from a standard phone line to Nodephone VoIP
I am very keen to test if Asterisk (maybe in combination with FreePBX)
is an option for a small team (one trunk, 6 to 8 endpoints, all with
various SIP clients).
I successfully set up Asterisk 13 (with the PJSIP channel driver) and FreePBX 13 on a
fairly standard Debian 8 box. I suspect the challenges start when it comes to
the provider's SIP details for the trunk.
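To make "the SIP details" concrete, this is roughly the shape of the pjsip.conf trunk definition I expect to need. All values below are placeholders I made up for illustration, not Internode's actual settings (and FreePBX would normally generate this for you):

```
; Sketch only -- hostnames, numbers and passwords are placeholders.
[nodephone]
type = registration
outbound_auth = nodephone_auth
server_uri = sip:sip.example.net.au
client_uri = sip:0398765432@sip.example.net.au
retry_interval = 60

[nodephone_auth]
type = auth
auth_type = userpass
username = 0398765432
password = secret

[nodephone_aor]
type = aor
contact = sip:sip.example.net.au

[nodephone_endpoint]
type = endpoint
context = from-trunk
disallow = all
allow = alaw,ulaw
outbound_auth = nodephone_auth
aors = nodephone_aor

[nodephone_identify]
type = identify
endpoint = nodephone_endpoint
match = sip.example.net.au
```

The part I can't guess at is what the provider actually requires (server, codecs, registration quirks), which is exactly what I'm asking about.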
I came across a few forum posts and discussions about the Internode
settings, but I wonder if anyone from the LUV community runs such a
setup or has some experience with Asterisk and Internode (especially
their Nodephone product)?
As I pointed out above, we don't have the line yet, so it's more
curiosity than a specific problem I want to solve (yet) :-)
Any feedback is welcome.
Good morning good people,
When I mount my Samsung phone on Debian Linux, under which directory (starting
from root) can I access its files from the command line? I'm
trying to find and copy an MMS video file off it.
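For reference, and assuming a stock Debian desktop (GNOME with gvfs), an MTP phone usually isn't mounted under a fixed directory in / at all; gvfs exposes it per-user instead. A sketch of where to look (the exact directory name for the phone will differ):

```shell
# MTP devices mounted by the desktop (gvfs) appear under the user's runtime dir:
GVFS="/run/user/$(id -u)/gvfs"
ls "$GVFS" 2>/dev/null \
  || echo "nothing mounted - unlock the phone and choose 'File transfer (MTP)'"

# The phone shows up as a directory named something like 'mtp:host=...';
# video files can then be found and copied with ordinary tools:
find "$GVFS" -iname '*.mp4' -o -iname '*.3gp' 2>/dev/null
```

If the file manager hasn't mounted the phone, gvfs-mount can do it from a terminal.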
Sorry if that's worded awkwardly, hope it makes sense.
Best regards to all,
I've been collecting hardware for Wooranna Park Primary (which runs a Coder Dojo as described in a LUV lecture last year). They are after things that the kids can fix up and learn from. PCs and Laptops that they can install Linux on and random hardware they can play with.
If you have stuff that may be suitable and can bring it to a LUV meeting let me know off list.
Sent from my Nexus 6P with K-9 Mail.
We're running a functional-programming event, Compose Melbourne,
on Monday and Tuesday this coming week at RMIT in the City:
Since it's a local, community-based technical event, and there
are strong links between functional programming and FOSS, I
thought it might be of interest to LUV members.
Day 1, Monday, is a conference with speakers and paid registration.
Day 2, Tuesday, we'll be running free workshops, including one
on Haskell for Beginners. While the workshops are free, you
still need to register, and possibly install software on your
laptop ahead of time. More generally, Tuesday is a free
unconference, a chance for people to get together informally and
share their interest in functional programming.
So, if you'd like to learn more about functional programming,
and meet people also interested in functional programming, then
come along to Compose Melbourne.
Sorry for short notice on this.
— Smiles, Les.
"On August 25, 1991, an obscure student in Finland named Linus Benedict
Torvalds posted a message to the comp.os.minix Usenet newsgroup saying
that he was working on a free operating system as a project to learn
about the x86 architecture. He cannot possibly have known that he was
launching a project that would change the computing industry in
fundamental ways. Twenty-five years later, it is fair to say that none
of us foresaw where Linux would go — a lesson that should be taken to
heart when trying to imagine where it might go from here."
And thanks to LWN.net for their consistently excellent coverage.
Share and enjoy,
Friday random semi-serious question:
Like many people I have too much data: some work, some creative, some
family, some other... etc. I have some on backup USB drives that I
have never gone back to, and also a few unplugged HDDs in the tower (I
can't even remember why).
I don't really want to go and buy something (or pay for cloud
storage), but I thought I might do something with the stuff I have lying
around. Does anyone know if this is possible:
- Set up a Raspberry Pi server which exposes a single file system
- Link it to the USB HDDs of different sizes that I have lying around
- I save my data to it for backups
- The data is placed redundantly over the drives so it can recover from
one [or more] dying
- I can remove one of the drives and access the data on it directly (so
it is FAT32 not Linux RAID)
Any ideas? GFS?
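In case it helps the discussion: one stack that seems to tick these boxes is mergerfs (pools drives of different sizes into a single tree; each drive keeps its own ordinary filesystem and stays individually readable) plus SnapRAID for parity. A sketch, untested on a Pi, with purely illustrative mount points:

```
# /etc/fstab -- pool the USB drives into a single tree (mergerfs)
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0

# /etc/snapraid.conf -- parity on a separate drive covers one disk dying
parity /mnt/parity/snapraid.parity
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
```

One design caveat: the parity drive needs to be at least as large as the biggest data drive.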
I am working on a query that I have used both joins and sub queries to
try to make into something efficient. I have used the EXPLAIN command to
make sure I am taking advantage of indexes and I have added them where
they are lacking.
The structure of the query looks like this:
People_table <--- where the rows I want to show on the screen come from
(name, address, etc)
- Notes <--- One Person may have many notes, using date range search,
also using INT search, and this is using FULLTEXT IN BOOLEAN MODE searching
- Profile <--- This is a single text column, this is using FULLTEXT IN
BOOLEAN MODE searching
- People <--- Filtering by state, country, status, other ENUM / INT flags
I have tried (JOINS):
People Criteria AND Notes Criteria AND Profile Criteria
And tried (SUB QUERIES)
WHERE People.id IN (SELECT P.id FROM Notes P WHERE Notes Criteria
  AND P.id IN (SELECT P.id FROM Profile P WHERE Profile Criteria
    AND P.id IN (SELECT P.id FROM People P WHERE People Criteria)))
So far I have killed the process after a few minutes as it just takes
too long. I noticed temporary tables were being created, so I enlarged the
join buffer pool in my.cnf, which cleared some of the feedback from EXPLAIN
(...and I tried a few others...).
Then there is this:
- I can do the 3 queries in PHP independently, use the results to build a
list of IDs at the application layer, and send a 4th query to fetch the
final rows - and it only takes a few seconds. A FEW SECONDS!
- My gut tells me that this should be most efficiently handled at the
database layer.
- How can 3 subqueries take more time than 3 independent queries?
- The "People Criteria" is all ENUMs and INTs, which are indexed - I
thought that by making this the innermost subquery it would filter out
the most rows before the slower FULLTEXT queries kicked in - is this
right?
- JOINs vs subqueries - is there a "rule"? Does this "rule" get pushed
aside when real-world testing proves the user experience is better when
multiple SQL queries are combined at the application layer, as opposed
to handing it all off to the MySQL server?
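For anyone reading along: on the MySQL versions of that era, an IN (SELECT ...) could be executed as a dependent subquery, re-run once per outer row, which matches these symptoms. One commonly suggested rewrite materialises each filter as a derived table and JOINs them. A sketch only - the table and column names below are hypothetical, not the real schema:

```sql
-- Sketch: each derived table is computed once, then joined on the id.
SELECT p.id, p.name, p.address
FROM People AS p
JOIN (SELECT person_id FROM Notes
      WHERE MATCH (body) AGAINST ('+keyword' IN BOOLEAN MODE)
        AND note_date BETWEEN '2016-01-01' AND '2016-06-30'
      GROUP BY person_id) AS n  ON n.person_id = p.id
JOIN (SELECT person_id FROM Profile
      WHERE MATCH (profile_text) AGAINST ('+keyword' IN BOOLEAN MODE)
      GROUP BY person_id) AS pr ON pr.person_id = p.id
WHERE p.state = 'VIC' AND p.status = 'active';
```

This is essentially the same plan as the three-queries-in-PHP approach, just expressed inside the server.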
Thanks in advance.