Computer Tips that Help Small Businesses Operate Profitably

by: Sharron Senter

When working properly, computers enable small businesses to make big-business profits; when one fails, the business can suddenly grind to a halt. In most instances, computers act up because of a lack of care by their owners. Here are four computer tips that’ll keep you up and running smoothly.

Tip #1 – Back up your computer’s data no less than once a month.

Too often a small business is thrust back to infancy because it didn’t take time to back up precious data – information that took hours upon hours to create, and information that’s irreplaceable, such as customer databases or employment contracts. Keep in mind you’ll want to back up your written documents as well as your financial records from programs such as Quicken and your email address books. If your computer has a CD writer, it’s simple and quick to back up your data: just select the files and folders you want to back up and copy them to the writer.

Or if you have an older computer, copy your most important files to a floppy disk. Either way, don’t stop there: take the CD or disk and store it in a fire-safe box! To expedite the process, organize your files within folders so you can quickly grab and copy one or two folders.

Tip #2 – Don’t turn off your computer every evening.

Too frequently, computer users turn their computers off every evening. This is unnecessary and not recommended. A computer’s components are at their most vulnerable when being turned on and off; when a computer has to heat up (power on) or cool down (power off), it’s at that precise moment that components fail. It’s recommended you turn your computer off only once or twice a week, or only when necessary, such as during a power outage. However, don’t do the opposite and never turn your computer off, since many antivirus programs require a reboot before new virus patches take effect.

Tip #3 – Automate your antivirus software so it updates automatically no less than once a week.

Depending on your software, you may need to prompt it to update. Unfortunately, there are people with too much time on their hands whose goal is to attack your computer and make it unusable. You are not fully protected from viruses and spyware (pop-ups, tracking cookies, etc.) unless you’re using a combination of antivirus software, anti-spyware software and a firewall – a piece of hardware that protects computers from being hacked. You must have all three pieces in order to ward off viruses, lurkers and attacks. What’s more, most attacks are very quiet; you won’t know someone is on your computer. Instead, intruders secretly store illegal material, such as child pornography or pirated MP3s, on your computer, and redirect people seeking that material to your computer instead of theirs.

Tip #4 – Install a firewall if you keep your computer constantly on.

Using a broadband or DSL connection dramatically increases your exposure to being hacked. On average, a home-based computer is attacked within 15 minutes of going online. The only true way to protect a computer from a hacker is to install a hardware firewall. It’s a misconception that software-based firewalls alone ultimately defend computers; for full protection, computers need a hardware firewall, such as a SonicWall or NetScreen unit – a component installed between a home user’s cable or DSL connection and their computer.

About The Author

Sharron Senter is co-founder of http://www.VisitingGeeks.com – an on-site computer repair, security and networking company that helps families, home power users and small businesses north of Boston, in southern NH and in Maine. Visiting Geeks’ technicians are crackerjacks at squashing viruses and pop-ups, and at securing computers and making them perform faster. To reach Visiting Geeks, call (978) 346-4087 or visit http://www.VisitingGeeks.com. Sharron’s also the author of "Make Money While Sleeping." Learn more at http://www.sharronsenter.com/fs_increase_seo.shtml

This article was posted on October 05, 2004

by Sharron Senter

Got Virus?

by: Woody Bowers

GOT VIRUS? Your Data is NOT lost forever!

In the wake of so many computer viruses running wild, "hope is not lost"!

With the recent release of such destructive viruses as MyDoom, Netsky, Mofei, LovGate and many more, there is an affordable solution for recovering your lost files from your hard drive.

Selecting a Data Recovery Service Company can be a challenging and confusing undertaking to say the least.

ECO Data Recovery, located in Palm Beach Gardens, Florida, has come to the rescue of many individuals, small businesses and large corporations around the world. When downtime means lost revenue and it seems like there is no light at the end of the tunnel, you can always count on ECO Data Recovery to get you up and operating ASAP.

These days you never know when your computer system will go down due to viruses, sabotage or natural disaster. We suggest that everyone back up their files regularly. Nobody ever wants to think about their hard drive crashing or a virus taking over their computer, so backing up your files tends to be the last thing on your mind.

Often, time is of the essence. We know that when your business is down, fast is never fast enough; therefore, ECO offers an expedited service for time-sensitive situations.

As technology advances, so do the skills of what we refer to as "hackers". These hackers are responsible for many of the damaged files we have recovered. As the hackers’ skills evolve, so must our teams of engineers. We understand that there will always be some hacker out there with the goal of causing chaos. ECO Data Recovery will be there to undo the damage they may have done and get you up and running in the fastest time possible.

Viruses are not the only cause of lost files!

When a hard drive is making an awful noise, more often than not you have a hardware problem. ECO Chief Engineer Sean Flanders warns, "If you hear strange noises emanating from your computer, shut it off immediately before further damage is incurred."

When a drive is still clinging to life (barely spinning), many people try the cheapest solution and attempt to run a data recovery software utility. This is a major mistake! "Attempting to utilize recovery software can make your data hard to salvage, if not impossible in some cases. These programs may write data to the drive, which then overwrites your original data, making data recovery almost impossible," states Brian Cain, VP of Sales at ECO.

Take heed of the words of Charles Roover, President of ECO Data Recovery: "Be aware of the fate that could befall your computer and/or network and take precautions. Back up your files often! Nobody likes to think about losing their data; however, when you have a disaster, we’re there to rescue you!"

Over the past 10 years ECO Data Recovery has saved many individuals and companies by retrieving their lost data! We’re only a phone call away!

http://www.ecodatarecovery.com

About The Author

Woody Bowers

Dir. Business Development

Eco Data Recovery

(800) 339-3412

woody@ecodatarecovery.com

This article was posted on March 11, 2004

by Woody Bowers

Search Technologies

by: Max Maglias

Each of us has faced the problem of searching for information more than once. Regardless of the data source we are using (the Internet, the file system on our hard drive, a database or the global information system of a big company), the problems can be multiple: the physical volume of the database searched, the information being unstructured, different file types, and the complexity of accurately wording the search query. We have already reached the stage where the amount of data on a single PC is comparable to the amount of text stored in a proper library. And unstructured data flows are only going to increase, and at a very rapid pace. What might be just a minor misfortune for an average user can mean significant problems for a big company with no control over its information. So the need for search systems and technologies that simplify and accelerate access to the necessary information arose long ago. Such systems are numerous, and not every one of them is based on a unique technology; the task of choosing the right one depends directly on the specific tasks to be solved in the future. While demand for perfect data searching and processing tools is steadily growing, let’s consider the state of affairs on the supply side.

Without going deeply into the various peculiarities of the technology, all search programs and systems can be divided into three groups: global Internet systems, turnkey business solutions (corporate data searching and processing technologies), and simple phrase or file search on a local computer. Different directions presumably mean different solutions.

Local search

Everything is clear about search on a local PC. It isn’t remarkable for any particular functionality, except for the choice of file type (media, text, etc.) and the search destination. Just enter the name of the file you’re looking for (or a fragment of text, for example in Word format) and that’s it. The speed and result depend fully on the text entered into the query line. There is zero intelligence in this: the program simply looks through the available files to determine their relevance. This is, in a sense, explicable: what’s the use of creating a sophisticated system for such uncomplicated needs?

Global search technologies

Matters stand totally differently with search systems operating in the global network. One can’t rely simply on looking through the available data. The huge volume (Yandex, for instance, can boast an indexing capacity of more than 11 terabytes of data) of the global chaos of unstructured information would make a simple search not only ineffective but also long and labor-intensive. That’s why the focus has lately shifted towards optimizing and improving the qualitative characteristics of search. But the scheme is still very simple (except for the secret innovations of each separate system): phrasal search through an indexed database, with proper consideration for morphology and synonyms. Undoubtedly, such an approach works, but it doesn’t solve the problem completely. Reading the dozens of articles dedicated to improving search with the help of Google or Yandex, one arrives at the conclusion that without knowing the hidden capabilities of these systems, finding a relevant document for a query is a matter of more than a minute, and sometimes more than an hour. The problem is that this realization of search is very dependent on the query word or phrase entered by the user. The more indistinct the query, the worse the search. This has become an axiom, or dogma, whichever you prefer.

Of course, by intelligently using the key functions of the search systems and properly defining the phrase by which the documents and sites are searched, it is possible to get acceptable results. But this would be the result of painstaking mental work and time spent looking through irrelevant information in the hope of at least finding some clues on how to refine the search query. In general, the scheme is the following: enter the phrase, look through several results, realize the query was not the right one, enter a new phrase, and repeat these stages until the relevancy of the results reaches the highest possible level. But even in that case the chances of finding the right document are still slim. No average user will voluntarily go for the sophistication of "advanced search" (although it is equipped with a number of very useful functions, such as the choice of language, file format, etc.). The ideal would be to simply insert a word or phrase and get a ready answer, without particular concern for how it was obtained. Let the horse think – it has a big head. This may not be exactly to the point, but one of Google’s search functions, called "I’m Feeling Lucky!", characterizes the existing search technologies very well. Nevertheless, the technology works – not ideally, and not always justifying the hopes placed in it – but if you allow for the complexity of searching through the chaos of the Internet’s data volume, it can be acceptable.

Corporate systems

Third on the list are the turnkey solutions based on search technologies. They are meant for serious companies and corporations possessing really large databases and all sorts of information systems and documents. In principle, the technologies themselves can also be used for home needs: for example, a programmer working remotely from the office will make good use of such search to access program source code scattered across his hard drive. But these are particulars. The main application of the technology is still solving the problem of quickly and accurately searching through large data volumes and working with various information sources. Such systems usually operate by a very simple scheme (although there are undoubtedly numerous unique methods of indexing and query processing beneath the surface): phrasal search, with proper consideration for all the stem forms, synonyms, etc., which once again leads us to the human-factor problem. When using such technology, the user must first word the query phrases that will serve as the search criteria and that are presumably present in the documents to be retrieved. But there is no guarantee that the user will be able to independently choose or remember the correct phrase, and furthermore, no guarantee that the search by this phrase will be satisfactory.

One more key issue is the speed of query processing. Of course, when using a whole document instead of a couple of words, the accuracy of search increases manifold. But to date, this possibility has not been used because of the high computational cost of such a process. The point is that search by words or phrases will not provide us with highly relevant results, while search by a phrase as long as a whole document consumes far more time and computer resources. Here is an example: when processing a one-word query there is no considerable difference in speed – whether it’s 0.1 or 0.001 seconds is not of crucial importance to the user. But take an average-size document containing about 2,000 unique words: a search that accounts for morphology (stem forms) and a thesaurus (synonyms), and then generates a relevance-ranked list of results for each keyword, can take tens of minutes – at even half a second per unique word, 2,000 words already means over 15 minutes – which is unacceptable for a user.

The interim summary

As we can see, currently existing search systems and technologies, although they function properly, don’t solve the problem of search completely. Where speed is acceptable, relevancy leaves much to be desired; where the search is accurate and adequate, it consumes lots of time and resources. There is, of course, an obvious way to solve the problem – increase computing capacity. But equipping the office with dozens of ultra-fast computers that continuously process phrasal queries consisting of thousands of unique words, struggling through gigabytes of incoming correspondence, technical literature, final reports and other information, is irrational and wasteful. There is a better way.

The unique similar content search

At present many companies are working intensively on developing full text search. Today’s calculation speeds allow for technologies that support queries of practically any size with a wide array of supplementary conditions. The experience gained in creating phrasal search gives these companies the expertise to further develop and perfect search technology. In particular, one of the most popular search engines is Google, and notably its "similar pages" function, which lets the user view the pages whose content is most similar to a sample page. While it works in principle, this function does not yet produce relevant results – they are mostly vague and of low relevancy, and sometimes the function finds no similar pages at all. Most probably, this is a consequence of the chaotic and unstructured nature of information on the Internet. But once the precedent has been created, the advent of truly workable similarity search is just a matter of time.

As for corporate data processing and knowledge retrieval systems, matters stand much worse there. Functioning (rather than paper-only) technologies are very few. And no giant or so-called search technology guru has so far succeeded in creating a real similar content search. Maybe the reason is that it’s not desperately needed; maybe it’s too hard to implement. But a functioning one does exist.

SoftInform Search Technology, developed by SoftInform, is a technology for finding documents whose content is similar to a sample document. It enables fast and accurate search for documents of similar content in any volume of data. The technology is based on a mathematical model that analyzes the document structure and selects the words, word combinations and text arrays, producing a list of documents of maximum similarity to the sample text fragment, each with a defined relevancy percentage. In contrast to standard phrasal search, with similar content search there is no need to determine the key words beforehand – the search is conducted across the whole document. The technology works with several sources of information, which can be stored both in text files of txt, doc, rtf, pdf, htm and html formats and in the most popular database systems (Access, MS SQL, Oracle, as well as any SQL-supporting database). It additionally supports synonym and important-word functions that enable a more specific search.
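To make the idea concrete, here is a minimal sketch of similar content matching using term frequency vectors and cosine similarity. This is a generic illustration of the approach only – not SoftInform’s patented algorithm, which, per the description above, also weighs word combinations, text arrays and synonyms:

<?php
// Score each document against a sample text by the cosine similarity
// of their term frequency vectors (a generic illustration only).
function term_freq($text) {
    preg_match_all('/\w+/u', mb_strtolower($text), $matches);
    return array_count_values($matches[0]);
}

function cosine_similarity(array $a, array $b) {
    $dot = 0;
    foreach ($a as $term => $freq) {
        if (isset($b[$term])) {
            $dot += $freq * $b[$term];
        }
    }
    $norm = function (array $v) {
        return sqrt(array_sum(array_map(function ($x) { return $x * $x; }, $v)));
    };
    return $dot > 0 ? $dot / ($norm($a) * $norm($b)) : 0.0;
}

$sample = term_freq('fast and accurate search for documents of similar content');
$documents = array(
    'a.txt' => 'accurate content search across large document collections',
    'b.txt' => 'a recipe for apple pie with cinnamon',
);
foreach ($documents as $name => $text) {
    printf("%s: %.0f%% similarity\n", $name, 100 * cosine_similarity($sample, term_freq($text)));
}
?>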

Similar content search makes it possible to significantly cut the time spent searching for and reviewing identical or very similar documents, to reduce processing time at the data entry stage by catching duplicate documents, and to form sets of documents on a given subject. Another advantage of the SoftInform technology is that it is not very sensitive to computer capacity and processes data at very high speed even on ordinary office computers.

This technology is not just a theoretical development. It has been tested and successfully implemented in a project providing legal advice by phone, where the speed of information retrieval is of crucial importance. And it will undoubtedly be more than useful in any knowledge base, analytical service or support department of any large firm. The universality and effectiveness of SoftInform Search Technology allow it to solve a wide spectrum of information processing problems: detecting duplicates (at the document entry stage it is immediately clear whether such a document already exists in the database), analyzing the similarity of documents already entered into the database, and searching for semantically similar documents, which saves the time otherwise spent selecting appropriate key words and viewing irrelevant documents.

Prospects

Besides its primary purpose (fast, high quality search through huge volumes of texts, archives and databases), an Internet direction can also be outlined. For example, it is possible to build an expert system for processing incoming correspondence and news, which would become an important tool for analysts at different companies. This will be possible mainly due to the unique similar content search technology, so far absent from existing systems other than SearchInform. The problem of spamming search engines with so-called doorways (hidden pages stuffed with key words that redirect to a site’s main pages, used to inflate the page’s rating with search engines) and the email spam problem (a more intelligent analysis would ensure a higher level of security) could also be solved with the help of this technology. But the most interesting prospect for the SoftInform Search technology is creating a new Internet search engine whose main competitive advantage would be the ability to search not just by key words but also for similar web pages, adding flexibility and making search more comfortable and efficient.

To draw a conclusion, it can be stated with confidence that the future belongs to full text search technologies, both on the Internet and in corporate search systems. Unlimited development potential, adequate results and acceptable processing speed for queries of any size make this technology much more convenient and in high demand. SoftInform Search technology might not be the pioneer, but it is a functioning, stable and unique one with no existing analogues (as the active Eurasian patent attests). To my mind, even with the help of "similar search" it will be difficult to find a similar technology.

About The Author

Max Maglias

[Phone] 2197964

[Email] press@searchinform.com

[Website] http://www.searchinform.com

This article was posted on August 17, 2005

by Max Maglias

Microsoft(r) Exchange Server Utilities – ESEutil & ISinteg

by: Troy Werelius

Microsoft includes two command line utilities with Exchange Server that are designed to accomplish various maintenance functions within the Exchange database. They are limited, complex, tedious, and time consuming when compared to the functionality contained within GOexchange. The best time to learn how to use these tools is in a lab environment, before you need them. Like firearms and prescription medications, these tools can be dangerous if you don’t understand how they work and when to use them. Imagine shooting a shotgun at a container full of water – a graphic demonstration of what can happen when you mishandle a powerful tool. These two utilities are named ESEutil and ISinteg.

ESEutil checks and fixes individual database tables and ISinteg checks and fixes the links between tables.

To better understand the difference between ESEutil and ISinteg, let’s use a building construction analogy.

Running ESEutil is like having a structural engineer check your house’s foundation. The engineer doesn’t care what’s inside the house. The engineer cares only whether the underlying structure is sound.

Running ISinteg is like having an interior decorator come inside your house to check the way you’ve laid out your furnishings. The decorator doesn’t care about the house’s foundation. The decorator cares only whether the rooms’ layout and decor meet with their approval.

As you can see from the analogy above, ESEutil and ISinteg are vastly different utilities, but they are complementary and in some ways dependent upon each other to provide proper Exchange maintenance. In the next section, we will provide a more in-depth description of these two Microsoft Exchange utilities.

About ESEutil

ESEutil checks and fixes individual database tables but does not check the mail data contained in the Extensible Storage Engine (ESE) database. Object-oriented databases like Microsoft Exchange consist of big, structured sequential files connected by a set of indexes. The underlying database technology that controls these files is called Indexed Sequential Access Method, or ISAM. The ESE database engine exposes the flat ISAM structure as a hierarchy of objects.

The function of ESEutil is to examine these individually indexed object pages, check them for correctness by comparing a computed checksum against a checksum stored in the page header, and verify that each page’s data is consistent.

ESEutil isn’t for casual use. So, don’t use ESEutil unless you absolutely need to run it and you understand what it does. To understand ESEutil, you need to know about the format of the ESE database on which ESEutil works, and you need to be familiar with ESEutil’s many modes of operation.

ESEutil is a useful tool because it can operate in many modes. Each mode, however, performs different functions, with its own limitations and caveats.

Defragmentation: ESEutil /d [options]

Recovery: ESEutil /r [options]

Integrity: ESEutil /g

Repair: ESEutil /p [options]

Checksum: ESEutil /k [options]

Each of these functions is executed within the utility using a cryptic, MS-DOS-like command structure with parameter qualifiers. For example, to run the defragmenter portion of the utility, an administrator would run "ESEutil /d [options]", and so on. For additional information on ESEutil, please refer to the GOexchange FAQ on our website – Microsoft ESEutil: http://www.goexchange.com/faq_GEvsMStools4.html

We are not going to attempt to cover all the potential pitfalls of ESEutil; however, here are a few major issues to keep in mind:

There are times when it is appropriate to use ESEutil on its own; however, a complete maintenance process includes the combined use of specific ESEutil and ISinteg commands, as well as other steps that must be undertaken.

ESEutil is a very powerful tool and, if the commands are entered improperly or in an incorrect order, the results can be catastrophic.

The ESEutil command structure can be very confusing and, at times, misleading. Changing one letter in the command structure executes a completely different utility function, and the results to an Exchange database can be disastrous.

Below are a few of the many different available modes and options for ESEutil, each of which can have very different results on a database. NOTE: For brevity we have not included entire command statements.

"ESEutil /d" will defragment the designated database and is a fairly straightforward, commonly used mode of operation.
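For example, a typical defragmentation run looks like the following (the path is an assumption based on a default Exchange 2000/2003 installation – adjust it to your environment, and always work from a verified backup):

ESEutil /d "C:\Program Files\Exchsrvr\MDBDATA\priv1.edb"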

Running a manual offline defragmentation is only part of the process that should be completed in order to keep the databases healthy. Many administrators run ESEutil on a database to remove deleted items and regain white space, then mistakenly assume that by doing so the process is complete. Performing this task, however, doesn’t check or address issues that may exist within the mail data itself, and it won’t fix the links between the tables of an ESE database. The database now contains a higher percentage of errors, warnings, and minor inconsistencies than it did prior to defragmentation. NOTE: Running ESEutil repeatedly without implementing a complete offline maintenance process is a certain recipe for disaster.

"ESEutil /d /p" will have a slightly different result.

The "/d" tells ESEutil to defragment the designated database. The "/p" option used with the "/d" instructs ESEutil to leave the newly created defragmented database in the temporary work area and not to overwrite the original database.

Now slightly modify the command to "ESEutil /p" and the actions taken on the designated database are extremely different. The "/p" invokes the Exchange "Repair" mode. At first glance this sounds like a great thing to do – it couldn’t hurt to try, because repairing the database should be beneficial, right? Wrong!

This command actually invokes a "Hard Repair" mode of ESEutil. This means that ESEutil will attempt to repair corrupt pages, but it makes no attempt to put the database in a consistent state.

If it finds problems that cannot be corrected, those pages will be discarded. Each page contains data; therefore, each discarded page represents data loss. Discarding certain pages of the database can actually render it useless. In other words, wave goodbye to your data.

Sometimes, using the repair mode is the only way to fix a database. In the vast majority of situations, however, it should be avoided except as a last resort, and there are specific steps that should be taken before and after using the "/p" repair mode.

About ISinteg

The purpose of the Microsoft ISinteg utility is to inspect and fix weaknesses within the information store (IS). ISinteg looks at the mailboxes, public folders, and other parts of the IS, checking for anything that appears to be out of place. ISinteg scans the tables and B-trees that organize the ESE pages into their logical structures. In addition, the tool looks for orphaned objects, or objects that have incorrect values or references.

Because ISinteg focuses on the logical level rather than physical database structure, it can repair and recover data that ESEutil can’t. When looking at the physical database level, ESEutil might find the data to be valid because it looks for things such as page integrity and B-tree structure. Data that appears valid to ESEutil from a physical view of the database might not be valid from a logical view. For example, data for various IS tables like the message, folder, or attachments table may be intact, but the relationships among tables or records within tables may be broken or incorrect because of corruption in the logical structure. This corruption can render the database unusable.

Logical corruption of your Exchange Server databases is problematic and much more difficult to diagnose and repair than physical corruption. The user and administrator are typically unaware that logical corruption has occurred; no specific symptoms identify it. Often, by the time an administrator discovers the logical corruption, it’s too late for any repairs to take place.

You can run ISinteg one of two ways:

Default mode, in which the tool runs the tests you specify and reports its findings.

Fix mode, where you specify optional switches instructing ISinteg to run the specified tests and attempt to fix whatever it can.

The most important thing about running ISinteg is to run the command until it no longer reports any problems. Just running the command once does not guarantee that the information store is functioning properly. Depending on the size of the information store, the process can take a long time; however, it ensures that the databases are properly functional. For additional information on ISinteg, please refer to the GOexchange FAQ on our website – Microsoft ISinteg: http://www.goexchange.com/faq_GEvsMStools5.html
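For example, a typical fix-mode pass over all tests looks like the following (the server name is a placeholder, the database being checked must be dismounted first, and you should verify the exact switches for your Exchange version):

isinteg -s SERVERNAME -fix -test alltests

Run it, review the counts of errors and fixes it reports, and repeat until both reach zero.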

About The Author

Troy Werelius is CEO of Lucid8 LLC, the creators of "GOexchange, the Automated Maintenance Solution for Microsoft Exchange 5.5, 2000 and 2003 Servers". GOexchange prevents disasters, repairs problems, and accelerates performance. Visit http://www.goexchange.com for a free DEMO copy of GOexchange.

This article was posted on February 22, 2005

by Troy Werelius

MySQL Database Handling in PHP

by: John L

Most interactive websites nowadays require data to be presented dynamically and interactively based on input from the user. For example, a customer may need to log into a retail website to check his purchasing history. In this instance, the website would have stored two types of data in order for the customer to perform the check – the customer’s personal login details and the customer’s purchased items. This data can be kept in two types of storage – flat files or databases.

Flat files are only feasible in very low to low volume websites as flat files have 3 inherent weaknesses:

The inability to index the data. This makes it necessary to potentially read ALL the data sequentially – a major problem if there are a lot of records in the flat file, because the time required to read the flat file is proportional to the number of records in it.

The inability to efficiently control access by users to the data

The inefficient storage of the data. In most cases, the data would not be encrypted or compressed, as this would exacerbate problem no. 1 above.

The alternative, which is in my opinion the only feasible method, is to store the data in a database. One of the most prevalent databases in use is MySQL. Data that is stored in a database can easily be indexed, managed and stored efficiently. Besides that, most databases also provide a suite of accompanying utilities that allow the database administrator to maintain the database – for example, backup and restore.

Websites scripted using PHP are very well suited to the MySQL database, as PHP has a custom, integrated MySQL module that communicates very efficiently with MySQL. PHP can also communicate with MySQL through standard ODBC, as MySQL is ODBC-compliant; however, this will not be as efficient as using the custom MySQL module for PHP.

The rest of this article is a tutorial on how to use PHP to:

Connect to a MySQL database

Execute standard SQL statements against the MySQL database

Starting a Session with MySQL

Before the PHP script can communicate with the database to query, insert or update the database, the PHP script will first need to connect to the MySQL server and specify which database in the MySQL server to operate on.

The mysql_connect() and mysql_select_db() functions are provided for this purpose. In order to connect to the MySQL server, the server name/address, a username and a valid password are required. Once a connection is successful, the database needs to be specified.

The following 2 code excerpts illustrate how to perform the server connection and database selection:

@mysql_connect('[servername]', '[username]', '[password]') or die('Cannot connect to DB!');

@mysql_select_db('[databasename]') or die('Cannot select DB!');

The @ operator is used to suppress any error messages that mysql_connect() and mysql_select_db() functions may produce if an error occurred. The die() function is used to end the script execution and display a custom error message.

Executing SQL Statements against a MySQL database

Once the connection and database selection is successfully performed, the PHP script can now proceed to operate on the database using standard SQL statements. The mysql_query() function is used for executing standard SQL statements against the database. In the following example, the PHP script queries a table called tbl_login in the previously selected database to determine if a username/password pair provided by the user is valid.

Assumption:

The tbl_login table has 3 columns named login, password, last_logged_in. The last_logged_in column stores the time that the user last logged into the system.

// The $username and $passwd variables should rightly be set by the login form

// through the POST method. For the purpose of this example, we’re manually coding it.

$username = "john";

$passwd = "mypassword";

// We generate a SELECT SQL statement for execution.

$sql = "SELECT * FROM tbl_login WHERE login = '".$username."' AND password = '".$passwd."'";

// (In real code, escape user input with mysql_real_escape_string() before embedding it in SQL.)

// Execute the SQL statement against the currently selected database.

// The results will be stored in the $r variable.

$r = mysql_query($sql);

// After the mysql_query() command executes, the $r variable is examined to

// determine if the mysql_query() was successfully executed.

if(!$r) {

$err=mysql_error();

print $err;

exit();

}

// If everything went well, check if the query returned a result – i.e. if the username/password

// pair was found in the database. The mysql_affected_rows() function is used for this purpose.

// mysql_affected_rows() will return the number of rows in the database table that was affected

// by the last query

if(mysql_affected_rows()==0){

print 'Username/password pair is invalid. Please try again.';

}

else {

// If successful, read out the last logged in time into a $last variable for display to the user

$row=mysql_fetch_array($r);

$last = $row['last_logged_in'];

print "Login successful. You last logged in at ".$last.".";

}

The above example demonstrated how a SELECT SQL statement is executed against the selected database. The same method is used to execute other SQL statements (e.g. UPDATE, INSERT, DELETE, etc.) against the database using the mysql_query() and mysql_affected_rows() functions.
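For instance, a minimal sketch of an UPDATE using the same functions, continuing the example above (and assuming last_logged_in is a DATETIME column), could look like this:

// Record the current login time for the user who just authenticated.
$sql = "UPDATE tbl_login SET last_logged_in = NOW() WHERE login = '".$username."'";

$r = mysql_query($sql);

if(!$r) {
    print mysql_error();
    exit();
}

// mysql_affected_rows() now reports how many rows the UPDATE changed.
print mysql_affected_rows()." row(s) updated.";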

About The Author

This PHP scripting article is written by John L. John L is the Webmaster of The Ultimate BMW Blog! (http://www.bimmercenter.com).

The Ultimate BMW Blog!

daboss@bimmercenter.com

This article was posted on November 07, 2004

by John L

Microsoft CRM integration: Oracle database access from MS CRM

by: Boris Makushkin

Today’s article demonstrates customization possibilities for the Microsoft CRM web user interface. As an example we’ll use MS CRM integration with an ASP.Net application accessing customer data, where the customers are stored in an Oracle 10g database. Let’s begin:

1. First, let’s create the table to store customer information in the Oracle database. We’ll use the iSQL web application for table metadata manipulation:

2. The table is now created and contains four fields: CUSTOMER_ID, FIRST_NAME, LAST_NAME and ADDRESS. Fill it with text data:
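The original article illustrated these two steps with screenshots. The equivalent SQL is roughly the following (column types and the sample row are assumptions for illustration):

CREATE TABLE CUSTOMER (
    CUSTOMER_ID NUMBER PRIMARY KEY,
    FIRST_NAME VARCHAR2(50),
    LAST_NAME VARCHAR2(50),
    ADDRESS VARCHAR2(200)
);

INSERT INTO CUSTOMER (CUSTOMER_ID, FIRST_NAME, LAST_NAME, ADDRESS)
VALUES (1, 'John', 'Smith', '100 Main Street, Chicago, IL');

COMMIT;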

3. Now we’ll set up data access to the Oracle database from the ASP.Net application. Download the Windows Instant Client from the Oracle site, http://www.oracle.com. It doesn’t have to be installed – just unpack all the files into a directory of your choice, for example c:\oracle, and set the environment variable TNS_ADMIN to point to this directory.

4. In the c:\oracle directory (or wherever TNS_ADMIN points), create a tnsnames.ora file like the following (change the host and service names):

ORCL1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclehost.yourdomain.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL1)
    )
  )

5. Make corrections to the Windows registry to have the MS SQL Linked Server work properly with the Oracle OLE DB Provider. In the hive HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC\MTxOCI make these changes:

OracleXaLib = 'oracleclient8.dll'

OracleSqlLib = 'orasql8.dll'

OracleOciLib = 'oci.dll'

6. Now let’s create a Linked Server in MS SQL Server 2000:

Note: in the Security tab we need to use a security context with credentials that have valid access to the Oracle database.

7. The Linked Server is ready – let’s test that it works by opening the table list. We should see the customer table there:

8. Now we’ll create a stored procedure for Oracle data access:

SET ANSI_NULLS ON

SET ANSI_WARNINGS ON

GO

CREATE PROCEDURE MyCustomersList AS

SELECT * FROM OPENQUERY(ORACLE, 'SELECT * FROM Customer')

RETURN

9. The next step is customizing the Microsoft CRM user interface. We’ll add a customer list button to the Quote screen toolbar. Edit isv.config:
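The article originally showed the isv.config XML in a screenshot. A Quote toolbar button entry has roughly the following shape in the MS CRM 1.x ISV.Config schema (the element and attribute names here are reconstructed from memory and the URL is a placeholder – verify the exact schema against your MS CRM SDK):

<Entity name="quote">
    <ToolBar ValidForCreate="0" ValidForUpdate="1">
        <Button Title="Customer List" ToolTip="Oracle customer list" Url="http://YOURHOST/CustomerList.aspx" />
    </ToolBar>
</Entity>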

Change Url to your host name.

10. To create the ASPX page we’ll use a RAD tool for ASP.Net – WebMatrix:

11. Create a new page for data access:

12. Change its code to access our data:

Sub Page_Load(Sender As Object, E As EventArgs)

Dim ConnectionString As String = "server=(local);database=Albaspectrum;trusted_connection=true"

Dim CommandText As String = "EXEC MyCustomersList"

Dim myConnection As New SqlConnection(ConnectionString)

Dim myCommand As New SqlCommand(CommandText, myConnection)

myConnection.Open()

DataGrid1.DataSource = myCommand.ExecuteReader(CommandBehavior.CloseConnection)

DataGrid1.DataBind()

End Sub

13. Now we’ll test our web application by calling it from MS CRM:

Happy programming, implementation, customization and modification! If you want us to do the job – call us at 1-630-961-5918 or 1-866-528-0577 (Europe: +49 231 438 7600), or email help@albaspectrum.com.

About The Author

Boris Makushkin is Lead Software Developer in Alba Spectrum Technologies – USA nationwide Oracle, Navision, Microsoft CRM, Microsoft Great Plains customization company, serving Chicago, California, Arizona, Colorado, Texas, Georgia, Florida, New York, Canada, Australia, UK, Russia, Europe and internationally ( http://www.albaspectrum.com ), he is Oracle, Unix, Microsoft CRM SDK, Navision, C#, VB.Net, SQL developer.

BorisM@albaspectrum.com

This article was posted on February 21

by Boris Makushkin

Organizing Your Data to Write Better Copy

by: Neroli Lacey

Last quarter I talked about interviewing / gathering data. So now you’ve got several thousand words of notes, hopefully digitally recorded. What comes next?
GETTING ORGANIZED
I suggested organizing your interview questions into 4 groups. I’m going to label them for you A, B, C, D.

·what is the business problem? = A
·what is the high level solution? = B
·can you tell me more about the solution? = C
·why should I trust you (as my vendor?) = D

Any decent piece of writing has a beginning, a middle and an end. So before you start editing / writing you want a map, to show you where you are going. Take a blank sheet of paper, write four major headings and label them A, B, C, D, as above.
Now read your notes. When you find data relevant to "A" (the business problem), underline that copy and mark a big "A" in the margin (in red?). Keep working through until you have marked up relevant copy for all four sections of your piece.
You will be leaving out anything that does not seem suitable as you go.
THE CUT AND PASTE JOB
Next comes a cut and paste job. Group together all the "A"s, then the "B"s, "C"s and "D"s.
Next, take a look at all the ideas you have in the A group. It helps if you take a new sheet of paper and write a list of the ideas or facts in the A group. Now prioritize. Be ruthless. And trust your first instinct. If an idea seems to leap out and have life, put it first. The less important ones come later. Weed out any repetition or weak data. Now you work on flow. Do you have a logical flow of ideas that your reader can follow? Are you telling him/ her a story that you yourself could believe in?
You will go through the same exercise with the remaining blocks of notes, i.e. "B", "C" and "D".
EDITING IS PRIORITIZING
Editing is prioritizing. Often you will want to limit a list of ideas to 3. Three has a flow to it. And is about as much as any reader or listener can grasp at one sitting.
Finally you polish. Now you are reading for flow or musicality. You are cutting out superfluous ideas and words.
This is the long way to write.
THE SHORT WAY TO WRITE
The short way is to sift and prioritize all your notes in your mind, i.e. you turn on your thinking tool. The key idea will pop into view, and hey presto, you begin writing about that one. You have a feeling for what comes next and what after that. You understand how to prioritize your ideas. Soon, with a bit of jiggling ideas around the page, your story has a beginning, a middle and an end.
You can teach yourself the short way by writing the long way, again and again. Or by turning copy round in the middle of the night for an 07.00am deadline as I often had to do as a newspaper feature writer.
"When we encounter a natural style, we are astonished and delighted: for we expected to see an author, and we find a man." – Blaise Pascal. Quoted with thanks to John R. Trimble, Writing with Style, published by Prentice Hall.
Do you have a robust marketing plan to execute against? How clear and persuasive is your website, brochure copy or direct mail? Call Neroli Lacey NOW to win more business TODAY.
CALL +1 612-215-3826 NOW
or email: neroli@beyondcommunications.com

About The Author

I’m Neroli Lacey of Beyond Communications Inc. in Minneapolis, MN. I’ve been helping executives transform their businesses and their lives with outstanding marketing materials since 1995. VISA, 3M and Perot Systems are some of my bigger clients. I have worked with clients in Boston, San Francisco, Dallas, Austin, Minneapolis, London, Paris, Amsterdam, Dublin and Delhi. I used to be one of the top journalists in Britain writing for The Times, The Sunday Times, The Daily Telegraph, The Independent, The Guardian, The Evening Standard, New Statesman, Vogue and Tatler.
Before newspapers I was an investment banker. I grew up in London, England, studying Latin with Greek at Bristol University.
Please visit my website: www.beyondcommunications.com

Or contact me at: neroli@beyondcommunications.com

612-215-3826

This article was posted on April 30, 2004

by Neroli Lacey

A view on Google’s Patent: Information Retrieval Based on Historical Data

by: Peter Faber

Google doesn’t stop innovating its search engine, and where others try to follow, Google is not just one step ahead, but ten steps ahead. Its latest innovation, which may actually have been in place for a year or longer, can be found in the patent "Information Retrieval Based on Historical Data."

The abstract of the patent is: "A system identifies a document and obtains one or more types of history data associated with the document. The system may generate a score for the document based, at least in part, on the one or more types of history data."

The goal of this article is to give a simplified representation of this patent, plus recommendations on the best SEO techniques for obtaining high rankings, with a specific focus on links. This article is the opinion of the writer, and following the recommendations in this article is done at your own risk.

Google’s search results have become increasingly difficult to explain, and many theories have been developed about what is going on. Most popular is the "sandbox" theory, which says that a new site is put in a virtual sandbox and has to wait until it has aged before obtaining high rankings. This patent contains some excellent information that can explain this phenomenon.

Information Retrieval

The information that this invention of Google claims to retrieve, based on the historical data, is:

Age/Time

Change

Trends

A score is calculated based on the above three factors, which can then be used, at least partially, to rank the selected pages.

Historical Data

The patent describes a huge amount of historical data. The following is an overview of most items for which historical data can be measured:

Pages/sites

Links

Anchor Texts

Content

Query

Traffic

Ranking

User

Domain

Ranking Based On Information Retrieved From Historical Data

The patent describes in quite a lot of detail how selected pages are ranked based on the information retrieved from historical data. This section describes the basic logic applied.

Age/Time

From all the historical data, a date of inception is used to determine four important values:

Age

Average Age

Date

Average Date

These factors can be determined for pages, links, anchor text, content, topics, queries, etc. Comparing the age or date of a page to the average of the site for example tells the search engine if this information is relatively new or old.

Comparing the average age or date of a page to the average age or date of all pages selected for a query (keyword phrase) tells the search engine if the page is relatively new or old. This information can be used to rank the selected pages.

Comparing to an average has the advantage that there is no preset base of rules that determines the rankings of a page. For one query, six months may be considered new (product descriptions, for example), while for another query six days may be considered old (news items, for example). It all depends on the average age.

This same logic applies to links. In order to determine how popular a page or site is, the average age of all back links tells the search engine whether the popularity of the page is recent or not. It makes sense that if most back links were obtained four years ago, and hardly anybody has been interested in linking to this page/site since then, the page is not as popular as the existing back links would suggest.
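As a toy illustration of this comparison (the patent describes the signals, not the exact arithmetic, so this is an assumption-laden sketch rather than Google’s actual formula), in PHP:

<?php
// Compare one page's average backlink age against the average for all
// pages returned for a query. Each backlink is represented by the Unix
// timestamp at which the link was first seen.
function average_age(array $first_seen, $now) {
    $total = 0;
    foreach ($first_seen as $t) {
        $total += $now - $t;
    }
    return $total / count($first_seen);
}

$now = time();
$page_links  = array($now - 30 * 86400, $now - 90 * 86400);                       // found 30 and 90 days ago
$query_links = array($now - 400 * 86400, $now - 800 * 86400, $now - 60 * 86400);  // typical for this query

if (average_age($page_links, $now) < average_age($query_links, $now)) {
    echo "Backlink profile is fresher than the query average: recent popularity.";
} else {
    echo "Backlink profile is older than the query average.";
}
?>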

The patent even goes as far as determining age factors for the anchor texts of links.

Change

Information changes over time. Opinions change, knowledge changes, popularity changes, etc. As mentioned before, a page that was popular four years ago may be totally forgotten now, yet still have most of the backlinks it obtained when it actually was popular. However, if this page all of a sudden becomes popular again and new back links start showing up, the average age of the backlinks will still remain high. This can prevent the page from ranking high.

Detecting changes is crucial to give old information the chance to rank high again. Consequently, the lack of change can be a reason to lower the rank of a page.

Trends

Even though comparing to averages is a great way to get information about freshness, it fails to recognize smaller events, like a sudden increase in the popularity of a page. Though detecting changes does help to recognize smaller events, more information can be obtained by detecting trends.

Sudden increases in popularity can be caused by seasonal events like Christmas or the Super Bowl. For this reason the search engine will try to determine trends within pages, links, anchor text, content, topics, queries, etc. Detecting trends makes it possible to rank pages highly that would not rank high with the standard ranking methods or with comparisons to average ages or dates. Google has recognized here a very important fact about information: the relevance and importance of information is (con)temporary.

Detecting Spam Using Historical Data

Having all kinds of historical data available can be used to detect search engine spam. Unexpected events that happen to a site can be an indication of spam. Obviously, a strong improvement in one single factor would not be a direct indication of spam; generally, multiple factors show strange behavior when a site is using spam to increase rankings. It would not be in Google’s interest to penalize a site for advertising. However, excessive advertising on sites/pages that are totally unrelated will not do your site any good.

Recommendations

Nothing has changed in regard to links. This patent pretty much confirms what we at www.textlinkbrokers.com already knew and have been explaining to our customers. The following recommendations can be helpful:

Keep links related

Related links matter, unrelated links can be considered spam.

Build links on a continuous, moderate basis

As the patent describes, the average age of your backlinks should not be too high. It is therefore wise to continue adding backlinks to secure a reasonable average age of all your backlinks. How many you need to add over time depends on your market.

Be better than the average

It is very important to be better than the average, but don’t overdo it – that would be expensive and unnecessary.

Focus on seasonal events

A good way to increase the success of your website is to set up text link campaigns for seasonal events. Start your advertising campaign two to three months before the actual event to give Google time to find the links and update your site’s information with them. After the event you can let these links go again.

Spread links over multiple sites (unique backlinks)

A very important factor is the number of unique websites in your backlinks. Google seems to put a strong emphasis on this factor.

About The Author

Peter Faber is an Internet marketing consultant working for http://www.textlinkbrokers.com, an SEO company specializing in link building. He has his own personal blog at http://www.seoworks.com.

This article was posted on April 26

by Peter Faber

Navision Customization and Reporting – tips for Programmer/IT Specialist

by: Robert Horowitz

Founded in 1984, Navision Software is a leading developer of innovative enterprise business management solutions. Now part of the Microsoft Business Solutions family, it’s a growing force in the midmarket space. Unlike other midmarket systems, Navision supplies the same database and business logic to a 2-user installation as to a 200-user one. This allows your company to grow with the product without being forced to move your programmer/developer to a more expensive database platform.

Developing in Navision

C/SIDE (Client/Server Integrated Development Environment): The core of Navision is C/SIDE, the foundation for all the business management functionality of Navision. It is made up of five building blocks, called object types, which are used to create the application. These five object types are shared throughout Navision to create every application area and give it a unified, consistent interface. This powerful language allows for the internal construction of new business logic and sophisticated reporting. Because of the internal nature of modifications, it’s highly recommended that you develop all your code in "processing only" report objects called from the native code base. By grouping all your code in logical units, upgrades and additional modifications are easier to manage.

C/ODBC and C/FRONT: Both C/ODBC and C/FRONT enable you to easily use information from Navision in familiar programs such as Microsoft Word and Microsoft Excel. The Open Database Connectivity driver for Navision (C/ODBC) is an application program interface (API) that provides a way for other applications, such as the entire Microsoft Office suite, to send data to and retrieve data from the Navision database through the ODBC interface.

External Tool – Navision Developer’s Toolkit: The Navision Developer’s Toolkit enables your Microsoft Certified Business Solutions Partner to upgrade your Navision solution to the latest version. It is used to analyze and upgrade customer and vertical solutions.

Reporting Options: Aside from the powerful internal reporting tool, which requires an in-depth knowledge of C/SIDE to make it useful, the other options are:

Jet Reports: Jet Reports is a complete reporting package utilizing Microsoft Excel. Using Excel you can create reports on any table of data from within any granule in Navision.

C/ODBC: Using the ODBC driver, the entire Microsoft Office suite and programs such as Crystal Reports can access the database. I would recommend using this tool for occasional reporting requests only; if you need to pull data out of the Navision database on a regular basis, one of the other options is a better choice. (A short C/ODBC sketch follows this list.)

Business Analytics (SQL Server required): Using Online Analytical Processing (OLAP) from Microsoft SQL Server 2000, Business Analytics organizes all of your business data into information units called cubes. Using a familiar Microsoft Outlook-style interface, Business Analytics presents this information on your desktop, where easy-to-use analytical tools allow you to carry out targeted analysis that is tailored by you, for you.

XBRL: Extensible Business Reporting Language (XBRL) for Navision enables simple and dependable distribution of all of a company’s financial information and ensures smooth and accurate data transfer. XBRL is an XML-based specification that uses accepted financial reporting standards and practices to export financial reports across all software and technologies, including the Internet.
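Because C/ODBC is a standard ODBC driver, any ODBC-capable environment can pull Navision data, not only the Office programs mentioned above. As a sketch (the DSN name, credentials, and the Customer table and field names are assumptions to adapt to your installation), in PHP:

<?php
// Query Navision through the C/ODBC driver via a preconfigured Windows DSN.
$conn = odbc_connect('Navision', 'user', 'password'); // DSN name is a placeholder
$result = odbc_exec($conn, 'SELECT "No_", "Name" FROM "Customer"');
while (odbc_fetch_row($result)) {
    echo odbc_result($result, 'No_') . ' ' . odbc_result($result, 'Name') . "\n";
}
odbc_close($conn);
?>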

Good luck in customization and reporting, and if you have issues or concerns – we are here to help! If you want us to do the job, give us a call at 1-866-528-0577, or email help@albaspectrum.com.

About The Author

Robert Horowitz is Certified Navision Specialist in Microsoft Business Solutions Partner Alba Spectrum Technologies – USA nationwide Navision, Great Plains, Microsoft CRM customization company, based in Chicago, California, Arizona, Texas, Florida, New York, Georgia, Washington, Colorado, Canada, UK, Australia, Moscow and having locations in multiple states and internationally (www.albaspectrum.com). You can contact Robert: welcome@albaspectrum.com.

roberth@albaspectrum.com

This article was posted on September 07, 2004

by Robert Horowitz

You Lost Your Data… Don’t Panic!

by: Emanuele Allenti

Inability to access data stored on a storage device can have many causes, from those that are easy to fix to those that are completely impossible to fix. If the damage is irreversible, data loss will occur. The causes of failure of your hard drive or CD-ROM drive can vary from a bad connection due to a loose wire (which is easily recoverable) to damage to the media itself, which may still be recoverable in many cases.

As in the medical profession, the first principle of data recovery is: 'do not harm'.

If you are facing a data loss situation, what not to do is very important!

Do not power up a device that has obvious physical damage.

Do not power up a device that has shown symptoms of physical failure. For example, disks that make 'obvious mechanical fault noises', such as ticking or grinding, should not be repeatedly powered on and tested, as this just makes them worse.

Activate the write-protect switch or tab on any problem removable media, such as tape cartridges and floppies. (Many good backups are overwritten during a crisis.)

Do not use free data recovery software. This is very important: free recovery tools can be dangerous and ruin your chances of a successful recovery. Many companies offer free, do-it-yourself (DIY) data recovery software for download on their websites.

Even the best of these programs work only in very specific situations; they may help, but usually only if you are facing one of a few narrow data loss scenarios.

Worse, some programs can cause further or permanent data loss. Though provided with good intentions, even when carefully used these utilities may cause recoverable data to be permanently lost, and may lose additional data along the way.

Still, there is something you CAN do. If you are having data access problems and your media shows no symptoms of physical failure or damage, check some obvious issues before deciding whether you need professional data recovery (a gentle read-only test is sketched after this checklist):

Are the power and disk cables properly connected?

Is the configuration or disk information correct?

Try the defective unit with a different adapter/controller interface or on a different computer.

If these steps are beyond your capabilities, is there an experienced technician at a local store or your company help desk whom you can consult? (Make sure whoever handles your data loss situation is fully aware that they should do nothing during troubleshooting that risks hurting your data.)
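If the drive still mounts, a read-only test can tell you whether the data is reachable without writing anything to the media, in keeping with "do no harm." Here is a minimal sketch in Python; the file path is a placeholder for something important on the suspect volume:

```python
# Minimal read-only sanity check. Reading never writes to the
# media. The path below is a placeholder for a file on the
# suspect volume.
import sys

TEST_FILE = "/mnt/suspect_disk/customers.db"  # placeholder path

try:
    with open(TEST_FILE, "rb") as f:
        while f.read(1024 * 1024):  # read in 1 MB chunks
            pass
    print("Read completed without I/O errors.")
except OSError as exc:
    print(f"Read failed: {exc}", file=sys.stderr)
    print("Stop here and consider a professional recovery service.")
```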

Still doesn’t work? Don’t panic. If the damage is to the drive’s electronics, it most likely can be fixed. If the damage is to, for example, the system areas of the disk, leaving the data zone intact, the data can theoretically, and in some cases practically, be recovered by a professional.

Look on the Net for data recovery companies, ask them questions, and explain your situation to them. In most cases they will be able to understand your problem and fix it for a fair price.

About The Author

Emanuele Allenti is the owner of http://www.harddiskdatabackuprecovery.com, a website containing tips and useful information written by experts for those interested in backup and data recovery.

This article was posted on September 27, 2004

by Emanuele Allenti

Data Recovery From Laptops

Data Recovery From Laptops

by: Jakob Jelling

A lot of important data is stored on laptops today, and laptop use has increased significantly in recent years as demand for portability and convenience has grown.

Demand for laptops has also risen with the spread of telecommuting; businesses often allow employees to work on laptops at home and bring them into the office.

Data loss on a laptop can cost a company many hours of employee work, and even people who use their laptops purely for personal tasks are not pleased to lose important data. Carrying a laptop around all day stresses the machine and increases the chances of data loss.

There are many companies that can help you with the laptop data recovery process. After you contact a company, you will likely first go through an evaluation process to assess the extent of the damage.

Many companies will offer a free evaluation and a quote if they feel they can successfully perform the recovery. It is a good idea to shop around, since each company may quote a different price; some even have fixed pricing.

If you are panicking over a blank laptop screen or a hard drive crash, you will be happy to learn that a high percentage of the requests data recovery companies receive are successfully completed. However, be prepared for the possibility that some of the data may never be recovered. The only way to find out is to get proactive and contact representatives from data recovery firms.

Several data recovery companies have specialized departments for performing data recovery operations on laptops. You can get a lot of your data recovered and receive it in a readable format, even from badly damaged storage devices.

Laptop systems are more fragile than desktops and therefore more susceptible to hard drive damage.

There are many companies online that offer internet-based laptop data recovery. This allows you to ship your laptop to the company’s headquarters and have it shipped back to you after the data has been recovered.

Problems you may experience with your laptop include a dead screen, a hard drive crash, and so on. There are different types of laptop data recovery operations you can perform. Laptop data recovery can allow you to recover files after such events and after technical malfunctions. You may also be able to recover MS Word, Excel, or PowerPoint files that you accidentally deleted.

Laptops can receive a lot of abuse and their hard drives can get damaged. You may need laptop data recovery support if:

You dropped your notebook and it no longer turns on.

An unsolicited email has dropped a Trojan or virus onto your laptop.

The laptop’s hard drive crashes.

You accidentally deleted some important company files and you want them retrieved in a hurry before your boss finds out.

If you have lost important company data and your boss just says to go and recover it, rapid data recovery can be a solution. Companies can often perform emergency operations to recover lost data and get you on your way in only 24 to 48 hours. However, be prepared to pay a premium for these services; whether they are worthwhile depends on how important the lost data is to you or your business.

If you are involved in the management of a company, you may want to keep the contact information of a reliable data recovery specialist near your desk. You never know when you may need one.

About The Author

Jakob Jelling is the founder of http://www.sitetube.com. Visit his website for the latest on planning, building, promoting and maintaining websites.

This article was posted on February 12

by Jakob Jelling

Offsite Backups Provide Digital Peace of Mind

Offsite Backups Provide Digital Peace of Mind

by: Harald Anderson

In today’s fast-paced, data-centric world of personal computers and consumer/business electronics (such as PDAs and digital media players), we have, as a society, developed a reliance on digital data. In particular, we depend on data stored on various magnetic media such as hard drives, removable disks, and magnetic tape. While some computer users may never have had a problem with loss of data due to viruses, Internet worms, or file corruption, most of us have at some time experienced the frustration and lost productivity that come with the loss of computer data.

Perhaps someone in your office deleted files off the network that your entire team had been working on for months. Or maybe the corporate firewall didn’t stop the latest Internet-borne virus with a penchant for overwriting ".doc" files with junk data. Like it or not, if you connect your computer to the Internet (and in some cases, even if you don’t), your mission-critical data is at risk. The question is: what can you do about it?

An excellent solution is to employ offsite backups. Offsite backup solutions let you store the data that is crucial to your business or personal computing. Offsite backup providers make it quick and easy for you to back up your most important files to a secure, offsite facility that offers redundant storage and round-the-clock access to your files in the event of a critical "system meltdown." When you use an online offsite backup provider, you can be secure in knowing that your files and important information will be available to you no matter what happens to the machines you work on every day.
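To make the idea concrete, here is a minimal sketch of what an offsite backup amounts to: archive the critical folder, then push it to a remote host over an encrypted channel. This is written in Python using only the standard library; the host, credentials, and paths are placeholders, and a real provider would supply its own secure client or API:

```python
# Minimal offsite-backup sketch: archive a folder, then upload it
# over FTP-with-TLS. Host, credentials, and paths are placeholders;
# this is not any provider's actual client.
import tarfile
from datetime import datetime
from ftplib import FTP_TLS  # FTP over TLS, from the standard library

SOURCE_DIR = "C:/CriticalData"  # placeholder folder to protect
archive = f"backup-{datetime.now():%Y%m%d-%H%M}.tar.gz"

# Step 1: bundle the folder into a timestamped compressed archive.
with tarfile.open(archive, "w:gz") as tar:
    tar.add(SOURCE_DIR, arcname="CriticalData")

# Step 2: push the archive to the offsite host.
with FTP_TLS("backup.example.com") as ftp:  # placeholder host
    ftp.login("user", "password")           # placeholder credentials
    ftp.prot_p()                            # encrypt the data channel
    with open(archive, "rb") as f:
        ftp.storbinary(f"STOR {archive}", f)
```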

Even if your computer needs to be completely reformatted or your laptop is stolen, you can have the peace of mind that the most important part of your computing experience — the data you generate on a day-to-day basis — is safe, secure, and always available to you.

Your DATA is your Life. Protect it.

Copyright 2005 Harald Anderson

About The Author

Harald Anderson is a freelance writer and webmaster for http://www.SafeHarborData.com, an online backup service. Download your free thirty-day trial and experience the Digital Peace of Mind that accompanies a secure disaster recovery routine for your business. http://www.SafeHarborData.com

This article was posted on February 01

by Harald Anderson