If you experience sudden disconnects immediately after connecting with your Sennheiser MM400 headset on Linux, check out this entry in the Arch Linux wiki.
Recently I ran into the problem that, after resuming from standby and changing my display configuration, my desktop no longer seemed to be displayed.
After a bit of searching, I found out that the KDE desktop was still running, but my Gnome desktop, installed in parallel, had started on top of it and was covering it. That is why I saw an empty desktop. The underlying problem is that nautilus is somehow started within a KDE session. Nautilus is not only a file manager but also starts a desktop component, which can then interfere with KDE's desktop.
To prevent nautilus from starting the desktop component, you can reconfigure nautilus with gconf-editor. Start gconf-editor, navigate to apps/nautilus/preferences in its folder structure and uncheck "show_desktop". After that, the desktop component will no longer be started automatically when nautilus runs.
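Alternatively, you can flip the setting from the command line with gconftool-2. A sketch, assuming nautilus stores the option under /apps/nautilus/preferences as in my setup:

```shell
# stop nautilus from drawing the desktop (key path as found in gconf-editor)
gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop false
```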
As I only use nautilus as a file manager within KDE and do not run Gnome as a desktop environment, this is no problem for me. If you also log into Gnome sessions with the same user, you should check whether your Gnome session still behaves correctly.
I have got plenty to do at the moment. So please excuse the lack of updates. The pipeline is quite full though ;-).
So, just one short thing I found out:
If you experience a strange slowdown of KMail/Kontact 4.4.10 after a KDE upgrade in Gentoo, re-emerge the kdepim-related packages.
If you use eix, you can easily find these packages by typing eix -I kdepim. I do not know whether all of them are related, but it does not hurt to rebuild them all after a KDE update.
You can do this with one command like this:
[code lang="bash"]emerge -1va $(eix -I --only-names kdepim)[/code]
After that, KMail/Kontact worked as expected again.
A few days ago I finally got it working. I now can synchronize my calendar and addressbook from my Android mobile (with synthesis syncml) to egroupware 1.8 and access this synchronized information via web interface or directly in Linux via GroupDAV with KDE’s Kontact/Akonadi (Kaddressbook, Kalendar) applications.
It was a stony path, though. Installing egroupware, activating its SyncML interface and allowing my egroupware user to use it went flawlessly, apart from the fact that the new synchronization account on my mobile (created with Synthesis) had to be named (both parts of the account) after the mail address of the user registered in egroupware.
Accessing the synchronized data in eGroupware-1.8.001.20101201 with Kontact turned out to be a real challenge, however. Despite several howtos for version 1.6 on the internet, I could not get it working with version 1.8. Kontact's authentication against egroupware always failed with my hosted, CGI-based PHP setup.
After hours of debugging and digging through the code, I found out that the base64-encoded authentication information is not decoded correctly, which is why the credentials cannot be extracted and egroupware authentication fails.
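To illustrate what should happen here: with HTTP Basic authentication the credentials travel as base64("user:password"), and the server must decode this string before splitting it at the colon. A minimal sketch (the credentials are made up):

```shell
# encode a made-up "user:password" pair, as a browser would for Basic auth
printf 'user@example.com:secret' | base64
# => dXNlckBleGFtcGxlLmNvbTpzZWNyZXQ=

# decoding must restore the original string; if this step fails server-side,
# the credentials cannot be extracted and authentication breaks
echo 'dXNlckBleGFtcGxlLmNvbTpzZWNyZXQ=' | base64 -d
# => user@example.com:secret
```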
I have discovered an issue with TortoiseSVN and a failing SSL handshake. After fiddling around for quite a long time, I found out that the problem is not related to the SVN server but to the TortoiseSVN client version or, in particular, to its linkage against different versions of Neon.
TortoiseSVN 1.6.11 links against Neon 0.29.4, whereas the working TortoiseSVN 1.6.10 links against Neon 0.29.3. The current development version of the 1.6 branch of TortoiseSVN also links against Neon 0.29.3 and works, too. Together with the fact that Neon announces "Fix GnuTLS handshakes failures with 'TLS warning alert' (Bryan Cain)" in the release notes for Neon 0.29.5, I strongly suspect the linkage against Neon 0.29.4 as the culprit for all the errors.
For all people using TortoiseSVN and experiencing these errors this means either to downgrade to TortoiseSVN 1.6.10 or to use the latest development version of the 1.6 branch until the issue is resolved.
Update: Bug is taken care of. See http://svn.haxx.se/tsvnusers/archive-2010-10/0340.shtml .
A few months ago, I came across the problem of creating a DOS-based bootable CD-ROM with custom data on it. I needed such an image to upgrade the BIOS of an old mainboard; sadly, the manufacturer did not provide a bootable CD themselves. It took me quite a long time to get the image working: either the image would not boot, or my custom data was not available. If you ever find yourself in the same situation and use a desktop Linux to create the images, I might have a nice solution for you.
In the following, I will describe how to create a bootable DOS based ISO with custom data on it with Linux command line tools and K3B.
But first things first.
I will use the following programs/configurations in this post.
- Burning tool K3B – http://www.k3b.org/
- Loop device support in the kernel:
[code]-> Device Drivers
  -> Block devices (BLK_DEV [=y])
    -> Loopback device support (CONFIG_BLK_DEV_LOOP)[/code]
- DOS Boot image drdos.img (drdosmin.zip from http://www.biosflash.com/e/bios-boot-cd.htm)
- Prepared K3B project file – http://www.phillme.de/dl/burn-dos-iso.k3b
- File structure: the K3B project file provided assumes the following layout under /tmp/dos-iso/:
[code]/tmp/dos-iso/
  img/drdos.img  # original/extendable DOS disk image
  loop/          # mount point for the extendable DOS image
  dosimg.iso     # ISO image created with K3B[/code]
Preparing the disk image
- Download DOS boot image drdosmin.zip (see above) and extract drdos.img to folder /tmp/dos-iso/img
- Prepare the DOS files you want to access from the running DOS
- [code lang="bash"]# mount the DOS image via the loop interface
mount -o loop /tmp/dos-iso/img/drdos.img /tmp/dos-iso/loop/[/code]
- Copy your DOS files into the loop/ folder mounted in the last step
- [code lang="bash"]# unmount the loop-mounted image
umount /tmp/dos-iso/loop/[/code]
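The image-preparation steps above can be sketched as one shell sequence (run as root; the copied file names are placeholders for your BIOS flash tool and BIOS image):

```shell
# create the assumed directory layout
mkdir -p /tmp/dos-iso/img /tmp/dos-iso/loop
# (extract drdosmin.zip into /tmp/dos-iso/img/ beforehand)
mount -o loop /tmp/dos-iso/img/drdos.img /tmp/dos-iso/loop/
cp FLASH.EXE BIOS.BIN /tmp/dos-iso/loop/   # example file names
umount /tmp/dos-iso/loop/
```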
Writing the extended ISO Image
- Start K3B
- Open prepared K3B project from http://www.phillme.de/dl/burn-dos-iso.k3b
- Check whether the correct boot image is selected (the default from the K3B project is "/tmp/dos-iso/img/drdos.img"); otherwise:
- Click paper with pencil icon
- Click “new…”
- Select extended drdos.img with files
- Leave other options untouched
- "Burn" the ISO image to a file to test it before burning a CD. The default location in the K3B project file is "/tmp/dos-iso/dosimg.iso"
Testing the created boot image
- Install kvm or other virtualisation software
- Start the ISO from the command line: [code lang="bash"]kvm -cdrom /tmp/dos-iso/dosimg.iso[/code]
- In the kvm window you should see something like “Starting Caldera DR-DOS…”
- Then you should get a commandline asking for the date (just hit enter)
- After that you should get a line looking like “A:\>_”
- Type "dir", hit enter and check whether you see all the files from the drdos.img prepared before
If everything works… congratulations! You can now burn a CD from the ISO image, boot your computer with it and access the copied files from a running DOS.
If it does not work for you or if you have any other suggestions, drop me a comment on this post.
As already stated, I do not want to reinvent the wheel at this point. There are many excellent tutorials, such as this one, concerning the optimization of Apache. For this reason, I only want to give a few short hints on how I did the optimization on Gentoo. I only performed a few of the possible steps, so be sure to read the mentioned tutorial (or others) carefully to identify the correct steps for you. In this post I will only cover software optimization, which only brings small performance enhancements. If your server is seriously struggling under tons of requests, consider upgrading your hardware, too.
I did the following software related optimizations on Gentoo.
Cleaning the Apache module list
Identifying the minimal module list for your configuration can be quite time consuming, because often (at least in my case) you do not know exactly which module provides which feature.
You can find a list of all modules and links to their documentation in the Gentoo wiki. Personally, I use PHP, URL rewriting, logging, CGI, basic authentication, vhosts and subversion (plus several other smaller features) and found the following list to work for me. Insert your list into /etc/make.conf as the APACHE2_MODULES variable.
This is my configuration:
[code]APACHE2_MODULES="actions alias asis auth_basic authn_alias authn_anon authn_default authn_file authz_default authz_user authz_host autoindex cgi dav deflate dir env expires filter headers log_config logio mime negotiation rewrite setenvif status unique_id vhost_alias dav_fs dav_lock dumpio ext_filter imagemap mime_magic"[/code]
To test which modules are needed, you do not have to recompile Apache every time. With a dynamic module configuration it is enough to edit /etc/apache2/httpd.conf, comment out the questionable modules, restart Apache and check whether your applications still work as expected. Once you know which modules are needed, change the mentioned variable so that unnecessary modules are no longer compiled when you update Apache.
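For example, to test whether mod_imagemap is really needed (the module chosen here is just an illustration), comment out its LoadModule line and restart Apache:

```
# in /etc/apache2/httpd.conf -- prefix the line with '#' to disable the module
#LoadModule imagemap_module modules/mod_imagemap.so
```

Then run /etc/init.d/apache2 restart and check whether your sites still work.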
After changing this variable you have to recompile and restart Apache. Before restarting, you should (if needed) update your config files, because new modules may add new LoadModule definitions to httpd.conf. Also be sure to check for <IfDefine …> directives, which only activate modules if you added the value after IfDefine as a -D argument in /etc/conf.d/apache2.
Choosing the right MPM (Multi Processing Module)
When choosing the MPM, I took the advice of the tutorial mentioned before and use the prefork module, because our server only has two cores. If you want to save some compile time, you can tell portage to compile only this MPM by setting APACHE2_MPMS="prefork" in make.conf. Like all other Apache modules in Gentoo, the MPM is configured in /etc/apache2/modules.d/, in 00_mpm.conf. In this file you can change essential parameters, such as the number of processes or threads Apache should create on startup. You can see which MPMs are available and which parameters they use here.
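As an illustration, the prefork section in 00_mpm.conf looks roughly like this (the numbers are example values, not recommendations; tune them to your memory and traffic):

```
<IfModule prefork.c>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
MaxClients         150
MaxRequestsPerChild 1000
</IfModule>
```

MaxClients is the most important knob: set it too high and Apache can push the machine into swap; too low and requests queue up.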
Deactivating overrides and symbolic links
Deactivating overrides means deactivating .htaccess files in every directory, which will be nearly impossible for hosting providers. But if you are the only one who updates the configuration, or can manage to be, you can save significant processing time. The same applies to symbolic links. If you want to know how to deactivate these features, look at the tutorial mentioned in the beginning (also for the deactivation of HostnameLookups).
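A sketch of what this looks like in a Directory block (the path is an example; FollowSymLinks without SymLinksIfOwnerMatch spares Apache the per-request ownership checks on symlinks):

```
<Directory /var/www/localhost/htdocs>
    Options FollowSymLinks
    AllowOverride None
</Directory>
HostnameLookups Off
```

With AllowOverride None, Apache no longer has to look for a .htaccess file in every directory along the path of each request.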
If you want to optimize Apache further, consider using a cache. Apache itself has several options, described in its Caching Guide. Apart from that, you can use a caching reverse proxy such as Varnish to cache and distribute your requests between different servers.
I hope this article and the linked documentation give a rough overview of tuning Apache for sites with numerous requests and help you save on hardware.
Comments and corrections are, as always, welcome.
It took me hours to figure out that a certificate file/chain is not needed to use 802.1x authentication with Linux at my university in Bamberg.
To use wicd as the authentication client, which in turn uses wpa_supplicant, you have to create a new wicd template, however.
You can either read the original forum post here or continue reading.
First create the new file /etc/wicd/encryption/templates/peap-mschapv2 and insert the following content:
[code lang=bash]name = PEAP with MSCHAPv2
author = ElitestFX
version = 1
require identity *Identity password *Password[/code]
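The metadata header above is only half of a wicd encryption template: after a ----- separator it also needs the wpa_supplicant section that wicd fills in. A sketch of the PEAP/MSCHAPv2 body, based on the original forum template (the $_ variables are wicd's template placeholders):

```
-----
ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="$_ESSID"
    scan_ssid=$_SCAN
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="$_IDENTITY"
    password="$_PASSWORD"
}
```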
After that, activate the template by adding its filename (peap-mschapv2 in our case) on a new line in /etc/wicd/encryption/templates/active.
Finally restart the wicd daemon
[code lang=bash]/etc/init.d/wicd restart[/code]
and choose the template PEAP with MSCHAPv2 in the wicd gui in the properties of the network. Insert your identity (eg. baxxxxxx) and password (your login password from the data center (RZ)) and connect successfully.
Yesterday I updated to wicd-1.7.0 and experienced issues when connecting to my local wired network. I found out that the issue only occurs when using wicd-gtk; wicd-curses manages to get a connection.
Searching the web, I found out that this problem only occurs when (dis)connection scripts are set. The issue is already filed upstream at wicd's Launchpad, with a working patch by Jonathan (comment #17). As I use wicd every day, a fix for this issue is quite important to me.
Gentoo currently only has version 1.7.0 in its tree (1.6.2 worked flawlessly), so I made an ebuild which includes Jonathan's patch and fixes the issue; it can be found in Gentoo's Bugzilla.
If you are experiencing the same issues in gentoo, check out the new ebuild and the patch.
My last post about server optimization dates back to February, because the last weeks/months have been quite busy. I promised to continue with MySQL optimisation, which I will do now. As with the other posts, I will only write down significant new information and otherwise link to information on the web, so that you can use this guide as a condensed view of the topic.
First of all, visit the documentation at mysql.com, which contains a whole chapter about optimization. If you only want to optimise the MySQL server, this subchapter will suit your needs.
The MySQL Query Cache
The query cache, although not a panacea, can bring big performance improvements and is, at least with MySQL 5.0, disabled by default. However, you have to know that it only optimises individual queries. It cannot "look" into your application and group all queries of the same request or similar.
The MySQL Performance Blog has a nice tutorial about configuration and background of the query cache.
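As a sketch, enabling a modest query cache in my.cnf might look like this (the sizes are example values, not recommendations):

```
[mysqld]
query_cache_type  = 1
query_cache_size  = 32M
query_cache_limit = 1M
```

After restarting MySQL, you can watch the hit rate with SHOW STATUS LIKE 'Qcache%'; to judge whether the sizes fit your workload.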
Adapting MySQL Cache Sizes
MySQL has several caches which need to be adapted to your personal needs. If you are using phpMyAdmin, it can give you hints about the cache sizes that need to be optimised in your current setup: just open server_status.php, which is linked on the start page as "Show MySQL runtime information". To change values, edit your /etc/mysql/my.cnf and restart MySQL afterwards.
If you want to know how to view this information directly via SQL commands, look at this optimisation guide underneath the subheading "Getting information about current values", or into the MySQL documentation. Both guides also list other possible improvements.
Identifying slow queries
Optimising your web application, or developing it with database performance in mind, should normally be the first step. However, it may often be the case that you cannot access the code of your application, lack the required skills or time, or do not want to change a standard application so that updates stay easy. Problems in this area can often be avoided, or at least minimized, by using an object-relational mapper such as Hibernate for Java, as it optimises the queries on the object level before executing them.
If you cannot use an ORM for some reason, or if you want to know which queries use the most resources, you can activate the MySQL slow query log. Pete Freitag directly states which entries are needed in the my.cnf file.
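For MySQL 5.0, the relevant my.cnf entries look roughly like this (the log path and threshold are examples; the last line additionally logs queries that use no index at all):

```
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
log-queries-not-using-indexes
```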
Optimising your application
After you have identified slow queries, or even before generating them, this blog post for PHP experts can help you find database-related performance hits in PHP applications.
As a general measure, I strongly recommend using indexes for frequently used or searched attributes. Although it is from 2001, the following guide explains these topics and their background very well.
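As a minimal illustration (table and column names are made up), this adds an index on a frequently searched column and then checks whether a query actually uses it:

```sql
-- speed up lookups on a frequently searched column
CREATE INDEX idx_users_email ON users (email);
-- EXPLAIN shows whether the index is used instead of a full table scan
EXPLAIN SELECT id FROM users WHERE email = 'user@example.com';
```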
Optimising the compilation
If you compile MySQL yourself, you can also get speed improvements by compiling it with special options. The MySQL documentation lists several possibilities.
Optimising your Linux kernel for MySQL
Apart from MySQL itself, you can also optimise your kernel parameters (sysctl.conf) for MySQL. If you want to, refer to this guide.
If this still is not enough, you will find plenty of other resources in this forum post. Some of the links listed there have already been mentioned above.
Before investing in new hardware, be sure to check the configuration of your database and, where possible, the database queries in your application. Even if the latter is not possible, simple server-side tuning can bring huge improvements, especially when the query cache is not activated, or when it or other caches are too small, so that MySQL has to write more data to disk than necessary.
I hope this small guide/list of links gave you an overview of MySQL performance tuning. For me it will serve as an aid to memory, and it will be expanded if future issues arise.