I am troubleshooting a mysterious networking issue and was looking for a way to rewrite TCP packet information so it gets sent to the wrong place. I found a link to a tool that can do some of that. It cannot be used on live traffic while we are experiencing the issue, but it is very interesting, so I did not want to lose the information about this tcprewrite tool.
I was reading this article about code coverage and found it quite interesting. I have certainly seen the water-cooler back-patting attitude, and I have heard a few people worried about that attitude.
The main thing I will take away from this article is that code coverage shows you what you are not covering. It won't tell you how well you are covering.
Laziness is quite the creative motivator sometimes.
So I have this application that accepts syslog messages and processes them to generate daily stats.
The glitch with the current code is that it has a hard-coded separator for the messages. Just one hard-coded separator. It may have worked reasonably well for a while, but since I have been monitoring the situation I can say that it drops quite a few messages every day because of that limitation.
While looking at the code to determine the scope of the change, I got less and less enthusiastic about addressing the issue as I discovered more and more places with the hard-coded value.
I put that aside and found something more fun to do for a few moments.
Then I started to get motivated by laziness…
So the current idea is to parse the syslog messages as they come in and replace the different separators with the hard-coded one. This requires the least amount of energy to get it fixed. What I am not sure about is how much technical debt I am accumulating with this idea.
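A minimal sketch of that normalization idea. The separator values below are made up for illustration; the real ones live in the application:

```python
# Sketch: normalize the separators in incoming syslog messages to the one
# the rest of the code hard-codes, so none of the downstream code changes.
# HARD_CODED_SEP and KNOWN_SEPS are hypothetical values, not the real ones.
HARD_CODED_SEP = "|"
KNOWN_SEPS = [";", "\t", ","]  # alternate separators seen in the wild

def normalize_message(msg):
    """Replace any known alternate separator with the hard-coded one."""
    for sep in KNOWN_SEPS:
        if sep in msg:
            return msg.replace(sep, HARD_CODED_SEP)
    return msg
```

This keeps the change in one place instead of touching every spot that assumes the hard-coded value, at the cost of one more layer the next maintainer has to discover.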
I am reading these articles about cloud infrastructure and how much on-demand power you can get for your application and batch processing. So easy and convenient.
Does this raw power at pennies make the programmers less efficient? Lazy?
Does it matter?
I will answer yes to both questions. I think it makes programmers lazy because you don't have to face the consequences of your laziness. What does it matter that a job takes 10% more time because you did not optimize every line? 50%? 90%? When the cost is a couple of dollars a day?
Yes, it does matter. I inherited an application from a previous programmer who did a very good job of building and maintaining it while he was with us. Now that I have taken over, I have a hard time seeing all the moving parts, but as I fix and change things I can see holes in certain places. Lately I found the code that produces the quick view of the last 6 months of performance. It goes through the last 6 months of data and computes a daily average. Every day. So the code needs the last 6 months of data to be present on the server, reads through it over and over, and produces a single file that differs by 2 lines from the previous day's. It takes 4 to 5 hours each day. Can it get any funnier? Management's answer to this issue was to throw more disk space at it, since the batch processing is done at night and does not impact the system. Hard disks are cheap.
Imagine this application in the cloud, and management asks for a 12-month history view. Everyone would have said: no problem, we provision another 300 to 400 GB of space and change one configuration file. The cost is minimal.
I personally want to address it in the code and get this optimized. Do you think I am getting any time dedicated to this issue? I am too costly for that option to make sense.
Because I am on physical hardware in our own data center, it does matter, and it will get addressed by fixing the application. We are constrained since we can't add any drives to the server. If this application were in the cloud, would it get any attention at all?
I can see both sides of the coin and the reasons the business would make that choice, but I still think that fixing the application is what needs to be done. Technical debt like this cannot simply be ignored.
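For the record, the optimization I have in mind is a rolling window: carry yesterday's daily averages forward, compute only today's, and drop the oldest day. A sketch, with hypothetical names and record shapes:

```python
from collections import deque

# Sketch: keep a rolling window of daily averages instead of re-reading
# six months of raw data every night. Only today's raw measurements need
# to be on disk. All names here are made up for illustration.
WINDOW_DAYS = 183  # roughly six months

def update_quick_view(window, day, values):
    """Append today's average and drop the oldest entry once the window is full.

    window -- deque of (day, average) pairs carried over from yesterday's run
    values -- today's raw measurements
    """
    window.append((day, sum(values) / len(values)))
    while len(window) > WINDOW_DAYS:
        window.popleft()
    return window
```

The nightly run then writes the window out as the quick-view file: the same two-line difference from yesterday, without touching six months of raw data.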
It took me over an hour to figure out why it was failing like this:
An error has occurred:
See /var/log/up2date for more information
In the log you would get this explanation:
[Tue Mar 23 11:30:43 2010] rhn_register There was an error while reading the hardware info from the bios. Traceback:
[Tue Mar 23 11:30:43 2010] rhn_register
Traceback (most recent call last):
  File "/usr/share/rhn/up2date_client/tui.py", line 1510, in _activate_hardware
    hardwareInfo = hardware.get_hal_system_and_smbios()
  File "/usr/share/rhn/up2date_client/hardware.py", line 863, in get_hal_system_and_smbios
    props = computer.GetAllProperties()
  File "/usr/lib64/python2.4/site-packages/dbus/proxies.py", line 25, in __call__
    ret = self._proxy_method (*args, **keywords)
  File "/usr/lib64/python2.4/site-packages/dbus/proxies.py", line 102, in __call__
    reply_message = self._connection.send_with_reply_and_block(message, timeout)
  File "dbus_bindings.pyx", line 455, in dbus_bindings.Connection.send_with_reply_and_block
dbus_bindings.DBusException: The name org.freedesktop.Hal was not provided by any .service files
[Tue Mar 23 11:30:43 2010] rhn_register
Traceback (most recent call last):
  File "/usr/sbin/rhn_register", line 82, in ?
  File "/usr/share/rhn/up2date_client/rhncli.py", line 65, in run
    sys.exit(self.main() or 0)
  File "/usr/sbin/rhn_register", line 64, in main
  File "/usr/share/rhn/up2date_client/tui.py", line 1721, in main
  File "/usr/share/rhn/up2date_client/tui.py", line 1608, in run
    if self._show_subscription_window() == False:
  File "/usr/share/rhn/up2date_client/tui.py", line 1562, in _show_subscription_window
  File "/usr/share/rhn/up2date_client/rhnreg.py", line 588, in getRemainingSubscriptions
  File "/usr/share/rhn/up2date_client/rhnserver.py", line 50, in __call__
    return rpcServer.doCall(method, *args, **kwargs)
  File "/usr/share/rhn/up2date_client/rpcServer.py", line 199, in doCall
    ret = method(*args, **kwargs)
  File "/usr/lib64/python2.4/xmlrpclib.py", line 1096, in __call__
    return self.__send(self.__name, args)
  File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1
    ret = self._request(methodname, params)
  File "/usr/lib/python2.4/site-packages/rhn/rpclib.py", line 314, in _request
    request = self._req_body(params, methodname)
  File "/usr/lib/python2.4/site-packages/rhn/rpclib.py", line 222, in _req_body
    return xmlrpclib.dumps(params, methodname, encoding=self._encoding)
  File "/usr/lib64/python2.4/xmlrpclib.py", line 1029, in dumps
    data = m.dumps(params)
  File "/usr/lib64/python2.4/xmlrpclib.py", line 603, in dumps
  File "/usr/lib64/python2.4/xmlrpclib.py", line 615, in __dump
    f(self, value, write)
  File "/usr/lib64/python2.4/xmlrpclib.py", line 619, in dump_nil
    raise TypeError, "cannot marshal None unless allow_none is enabled"
exceptions.TypeError: cannot marshal None unless allow_none is enabled
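The last frame is the actual complaint: xmlrpclib refuses to serialize None unless the caller enables allow_none. The behaviour is easy to reproduce on its own (shown here with Python 3's xmlrpc.client, the renamed xmlrpclib; "some.method" is just a placeholder method name):

```python
import xmlrpc.client

# By default the XML-RPC marshaller raises TypeError when params contain None.
try:
    xmlrpc.client.dumps((None,), "some.method")
except TypeError as exc:
    print(exc)  # cannot marshal None unless allow_none is enabled

# With allow_none=True it serializes None as a <nil/> extension element.
payload = xmlrpc.client.dumps((None,), "some.method", allow_none=True)
print("<nil/>" in payload)  # True
```

Presumably the registration client ended up passing None for a parameter after the hardware probe failed, which is why the error surfaces this far from the real cause.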
I also tried the rhnreg_ks utility with the --nohardware argument, and that is when it gave me more information about the error:
[Tue Mar 23 15:18:39 2010] up2date Warning: haldaemon or messagebus service not running. Cannot probe hardware and DMI information.
I started the haldaemon service and could then rhn_register my host.
Why did we disable the haldaemon service? Because the CIS benchmark recommends disabling that startup service.
You learn the side effects of security hardening as you go and try things.
I was reading these different articles on how to install Maven on Mac OS X, and I had to create the /usr/local directory, check the md5sum, untar the archive, and then modify my profile.
All this to realize that Maven is already installed on my Mac by default. It is version 2.2.0 rather than the latest 2.2.1, but for the book I am reading this is perfect.
A little bit of testing before starting to do all sorts of crazy things would have simplified my life.
I installed the m2eclipse plugin in Eclipse so I can get ready to see how it simplifies my life by not having to write pom.xml by hand.
The synergy+ page does say that it is beta and mostly stable in most cases, so all the warnings are there to tell you it may not work perfectly. I have no problem with them, but I do with CentOS 5. When I did the upgrades this morning, it replaced my synergy with synergy+ and caused me all sorts of issues for most of the morning. CentOS on the workstation in question is installed to be a stable system, not a system that installs beta software. If I want the cutting and bleeding edge I will play with Fedora.
After reverting everything back to synergy I am back to a productive environment.