
Should I hire an SEO consultant? – Search Engine Marketing in Maryland, DC, and Virginia

A common question I get from clients is whether they should invest in an SEO consultant. In most cases, I encourage folks to save their money and focus on the standard SEO best practices. But if a client does not have the time, or the staff, then by all means hire someone to help out. However, do NOT believe the hype. There are very few magic bullets or tricks that will get you a legitimately high ranking without eventually getting crushed by Google if they find you “cheating”. Focus on white hat SEO, focus on content, focus on your customers, and if you follow those rules, you will typically be rewarded by the search gods.

Don’t fall for paid link programs, make sure you understand what your SEO firm is doing, and check up on them. Many practices that SEO firms advocate will eventually cause problems once Google catches the cheating. If you are an SEO spammer, it is no big deal: just change your domain and start again. However, if you are a reputable company, you are putting your Internet reputation on the line. If the SEO pitch sounds too good to be true, don’t believe it. If you want to see how even big companies can get crushed, read about how JC Penney seriously harmed its Internet reputation by not closely watching what its SEO consultant was doing.

Did you know that over 20% of all search queries are now generated by mobile devices? In many of these cases, the search results are heavily influenced by the customer’s location. No matter what business you are in, it is more important than ever that your business is visible, and preferably at the top. In a region like Maryland, DC, and Virginia, optimizing your site to reflect where customers could be searching from is important. You can’t just provide your business address and hope customers can guess what areas you may service. Make it clear what your service areas are.

And, while talking about your address, a couple of tips are in order. First, make your address and phone number easy to read, and easy to copy and paste. Do not embed your address in a graphic. Yes, I have seen this, and if I had known the designer who did it, I would have fired them. Interestingly, the address for this company was Thomas Johnson Drive in Frederick, Maryland. I tried to copy and paste, but no luck. Guess how I typed it into Google? That’s right, Thomas JEFFERSON! Mobile search fail…

If you are still interested in finding some help with your SEO projects, drop us an email.  We would be happy to help you find the proper resource.


Linux Question of the Day – How do I Grep Recursively?


Another common question that I hear on a weekly basis. You would think this would have a pretty straightforward answer, and it does. I think the sheer number of options available with grep is what confuses folks. So, here is the basic way to perform this task.

grep -r "texthere" .

Simple, right? "texthere" is the string you are searching for, and -r tells grep to search recursively starting from the current directory (.). You can also restrict the search to specific filenames or types, such as *.txt or *.php, as shown below.
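
With GNU grep, the easiest way to restrict the search to particular file types is the --include option (the *.php pattern here is just an example):

grep -r --include="*.php" "texthere" .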

On some older Unix versions, you may find that grep does not support the -r option. In that case, try the following:

find ./ -type f | xargs grep "texthere"
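
If any of the file names contain spaces, a safer variant (assuming GNU find and xargs) passes null-delimited names instead:

find ./ -type f -print0 | xargs -0 grep "texthere"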

Also, some versions will not support searching for *.txt as the filename. In that case, try the following:

find /dir/to/search/ -iname '*.txt' -exec grep 'texthere' '{}' \;

A little-known piece of trivia: grep stands for “global regular expression print”, from the ed command g/re/p.


PHP Code to Open Zip Files and Extract the Contents


I recently was working on a project for a dynamic website that presented a huge collection of files available for download. All of the files (over 25,000) were stored in the ubiquitous ZIP format. This is great for reducing the amount of disk space required; however, it can make the files challenging to work with. What we wanted to do was allow visitors to the site to review the contents of an archive and view any of the files contained within the ZIP file. PHP has some handy functions that allow you to manipulate ZIP files; however, they are not well documented. Although fairly straightforward, I am including some example code here that you can quickly copy and paste for your requirements.

The core routines include zip_open(), which opens the ZIP file for use. zip_read() then allows you to traverse the ZIP directory to identify the files contained within. The zip_entry_* routines provide additional details about each enclosed file; zip_entry_open() provides the door that lets you open a specific file, and zip_entry_read() then extracts its contents. On our site, the contained files were mostly text files that could be easily displayed. Simply iterate through the file, collecting the contents into a temporary variable. If displaying on a webpage, use the handy nl2br() function to convert line feeds into HTML line breaks.

All in all, PHP’s built-in ZIP functions are a handy tool; a few simple lines of code will allow you to easily manipulate ZIP files and let individual files be viewed. Leave a comment if you have any questions or suggestions. You can see the final site at The Programmer’s Corner.

Code that will allow you to open a ZIP file, iterate through the directory, and retrieve the files contained within.

$zip = zip_open('PCorner/' . $_GET['category'] . '/' . $_GET['file']);
$output = '';
// Walk the ZIP directory and build a table row for each entry
while ($zip_entry = zip_read($zip)) {
    $file  = strtoupper(basename(zip_entry_name($zip_entry)));
    $size  = zip_entry_filesize($zip_entry);
    $csize = zip_entry_compressedsize($zip_entry);
    $type  = zip_entry_compressionmethod($zip_entry);
    $output .= '<tr><td>' . $file . '</td><td>' . $size . '</td><td>' .
               $csize . '</td><td>' . $type . '</td></tr>';
}
zip_close($zip);

Code that will allow you to open a ZIP file, find a specific file, and extract it for viewing, or further manipulation.

$zip = zip_open('PCorner/' . $_GET['category'] . '/' . $_GET['file']);
$output = '';
while ($zip_entry = zip_read($zip)) {
    $file = basename(zip_entry_name($zip_entry));
    // Only extract the entry the visitor asked for
    if (strtoupper($file) == strtoupper($_GET['operation'])) {
        if (!zip_entry_open($zip, $zip_entry)) {
            die('Unable to open ZIP entry');
        }
        // Read the entry and convert line feeds to HTML line breaks
        while ($data = zip_entry_read($zip_entry)) {
            $output .= nl2br($data);
        }
        zip_entry_close($zip_entry);
    }
}
zip_close($zip);
echo $output;

ExtJS – JsonWriter not respecting dateFormat used with JsonReader


Recently, while working on a project that used ExtJS as the front-end to a MySQL application, we came across an interesting issue with ExtJS’s JsonStore. The JsonStore automatically creates a JsonReader that is used to map data coming from the MySQL application to the ExtJS front-end. The JsonReader has an optional dateFormat config property, which allows incoming dates to be parsed properly using the format delivered from MySQL. While using the JsonWriter to update records from an EditorGrid, we noticed that the format sent back by the JsonWriter used ExtJS’s default dateFormat (03-04-2011T00:00:00).
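
For reference, the dateFormat in question is the one defined on the store’s field mappings, roughly like this (a minimal sketch; the URL, field names, and format string are illustrative, not from the original project):

// Illustrative JsonStore; the dateFormat on the field mapping is what matters here
var store = new Ext.data.JsonStore({
    url: '/tasks.php',                 // hypothetical endpoint
    root: 'rows',
    idProperty: 'id',
    fields: [
        {name: 'id', type: 'int'},
        {name: 'due_date', type: 'date', dateFormat: 'Y-m-d H:i:s'} // format MySQL delivers
    ],
    writer: new Ext.data.JsonWriter({encode: true})
});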

It turned out that the JsonWriter was not respecting the dateFormat defined when mapping the data with the JsonReader. There appear to be multiple hacks that could be used to fix this, such as adding a listener before the update that modifies the date field; however, we came across this little bit of code that works wonders. This code overrides the default DataWriter class to use the dateFormat property defined in the JsonStore field mappings.

// This override fixes JsonWriter not using the JsonReader dateFormat when writing
Ext.override(Ext.data.DataWriter, {
    toHash : function(rec) {
        var map = rec.fields.map,
            data = {},
            raw = (this.writeAllFields === false && rec.phantom === false) ? rec.getChanges() : rec.data,
            m;

        Ext.iterate(raw, function(prop, value) {
            if ((m = map[prop])) {
                var key = m.mapping ? m.mapping : m.name;
                if (m.dateFormat && Ext.isDate(value)) {
                    data[key] = value.format(m.dateFormat);
                } else {
                    data[key] = value;
                }
            }
        });

        // We don't want to write Ext's auto-generated id to the hash. Careful not to remove it
        // on Models not having an auto-increment pk, though. We can tell it's not auto-increment
        // if the user defined a DataReader field for it *and* that field's value is non-empty.
        // We could also do a RegExp here for the Ext.data.Record AUTO_ID prefix.
        if (rec.phantom) {
            if (rec.fields.containsKey(this.meta.idProperty) && Ext.isEmpty(rec.data[this.meta.idProperty])) {
                delete data[this.meta.idProperty];
            }
        } else {
            data[this.meta.idProperty] = rec.id;
        }
        return data;
    }
});



Count the number of tables in a database – MySQL


Determining the number of tables contained in a MySQL database is very straightforward, although it is an often-asked question. The simplest way to accomplish this is with the following SQL query. In this query, you supply the database name, and the SQL reads MySQL’s internal data dictionary (information_schema).

SELECT COUNT(*) AS 'Tables', table_schema AS 'Database'
FROM information_schema.TABLES
WHERE table_schema = 'The Database Name'
GROUP BY table_schema;

If you need to retrieve this information using PHP, you can use the following code. It creates a connection, runs the query, and returns the number of rows retrieved. You will notice this query is slightly different from the one above; there are often multiple ways to get the same result!

$conn = mysql_connect('localhost', 'USERNAME', 'PASSWORD');
$res  = mysql_query("SELECT table_name FROM information_schema.tables WHERE table_schema = 'test'", $conn);
echo mysql_num_rows($res);
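
If you are on a newer PHP release where the old mysql_* extension is no longer available, the mysqli equivalent looks roughly like this (a sketch; the host, credentials, and the 'test' schema are placeholders):

$conn = mysqli_connect('localhost', 'USERNAME', 'PASSWORD');
$res  = mysqli_query($conn, "SELECT table_name FROM information_schema.tables WHERE table_schema = 'test'");
echo mysqli_num_rows($res);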

And lastly, there is one more way to retrieve the number of tables in a database using PHP and the SHOW TABLES command. In this example, you need to be connected to a server and have the database set to the one you are querying against.

echo "<pre>";
$tbl_List = mysql_query("SHOW TABLES");
$i = 0;
while ($tables = mysql_tablename($tbl_List, $i)) {
    echo $tables . "\n";   // one table name per line
    $i++;
}
echo "<br />Table count = $i";
echo "</pre>";


What database does Facebook use?


“What database does Facebook use?” is one of the most common questions asked when folks start talking about which database is the most scalable for large-scale web applications. In fact, the person asking is usually an open source proponent who knows very well that Facebook uses MySQL as its core database engine. Because of this, it is often the single biggest argument developers use to push for MySQL at their company. I would imagine that is also why it is such a popular Google query.

While Facebook uses MySQL, they do not use it as-is out of the box. In fact, their team has submitted numerous high-performance enhancements to the MySQL core and the InnoDB plug-in. Their main focus has been on adding performance counters to InnoDB. Other changes focused on the IO subsystem, including the following new features (see the configuration sketch after this list):

  • innodb_io_capacity – sets the IO capacity of the server to determine rate limits for background IO
  • innodb_read_io_threads, innodb_write_io_threads – set the number of background IO threads
  • innodb_max_merged_io – sets the maximum number of adjacent IO requests that may be merged into a large IO request
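
As a rough illustration, on a server running a MySQL build that includes these options, the settings would go in my.cnf; the values below are purely illustrative, and innodb_max_merged_io only exists in builds carrying the Facebook patch:

[mysqld]
# Background IO rate limit
innodb_io_capacity      = 1000
# Number of background read/write IO threads
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
# Maximum adjacent IO requests merged into one large request (Facebook patch only)
innodb_max_merged_io    = 4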

Facebook uses MySQL as a key-value store in which data is randomly distributed across a large set of logical instances. These logical instances are spread out across physical nodes, and load balancing is done at the physical node level. Facebook has developed a partitioning scheme in which a global ID is assigned to all user data. They also have a custom archiving scheme based on how frequently and recently data is accessed on a per-user basis. Most data is distributed randomly. Amazingly, it has been rumored that Facebook has 1,800 MySQL servers but only 3 full-time DBAs.

Facebook primarily uses MySQL for structured data storage such as wall posts, user information, etc. This data is replicated between their various data centers. For blob storage (photos, video, etc.), Facebook makes use of a custom solution that involves a CDN externally and NFS internally.

It is also important to note that Facebook makes heavy use of memcached, a memory caching system used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce read time. Memcached is Facebook’s primary form of caching and greatly reduces the database load. Having a caching system allows Facebook to be as fast as it is at recalling your data: if it doesn’t have to go to the database, it simply fetches your data from the cache based on your user ID.
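
To make that concrete, the general cache-aside pattern looks roughly like this in PHP (a sketch only, not Facebook’s actual code; it assumes the PECL memcached extension, and the key scheme, server address, and query are illustrative):

// Illustrative cache-aside lookup: try memcached first, fall back to MySQL on a miss
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$key  = 'user:' . $user_id;              // hypothetical key scheme
$user = $mc->get($key);
if ($user === false) {
    // Cache miss: load the row from the database and cache it for next time
    $res  = mysql_query('SELECT * FROM users WHERE id = ' . (int) $user_id);
    $user = mysql_fetch_assoc($res);
    $mc->set($key, $user, 300);          // keep it in cache for 5 minutes
}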

So, while “What database does Facebook use?” seems like a simple question, you can see that they have added a variety of other systems around it to make it truly web scalable. But feel free to keep using the argument: “MySQL is as good as or better than Oracle or MS SQL Server; heck, even Facebook uses it, and they have 500 million users!”