The purpose of this paper is to highlight how rapidly the divide between good corporate governance (CG) and corporate social responsibility (CSR) is narrowing. The concepts covered under CG and the areas covered under CSR are no longer distinct: both philosophies advocate doing good and disclosing the good done.
The study briefly surveys Indian mythology to show that Indian philosophy is positive and inculcates positive values in Indians, which influence their socially responsible behaviour. The study then analyses the annual reports of 50 Indian private corporate houses to gauge the extent of the CG and CSR they undertake. The information is presented in both tabular and statement form.
In India, CG practices are mandatory under Clause 49 of the Listing Agreement of the Securities and Exchange Board of India (SEBI) for all companies listed on recognized Indian stock exchanges. Disclosure, however, has two parts: mandatory and non-mandatory. Compliance with the mandatory requirements is 100 per cent, but the results for the non-mandatory requirements are quite disheartening. Similarly, when the annual reports were analysed for the extent of corporate responsibility disclosure, the results were equally discouraging, for reasons discussed in the introduction to the paper.
The study reveals certain eye-opening facts that can serve as a useful guide for policy formulation for the Indian corporate sector.
The purpose of this paper is to decrease the traffic created by search engines’ crawlers and solve the deep web problem using an innovative approach.
A new algorithm was formulated, building on the best existing algorithms, to optimize the traffic caused by web crawlers, which accounts for approximately 40 percent of all network traffic. The crux of the approach is that web servers monitor and log changes and communicate them to search engines as an XML file. The XML file includes the information needed to regenerate refreshed pages from existing ones and to reference new pages that must be crawled. The XML file is also compressed to reduce its size to the minimum required.
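By way of illustration, a server-side sketch of such a change log in Python might look like the following. The element names, attributes, and patch representation are hypothetical, as the paper does not publish a schema; only the mechanism (log changes, distinguish refreshable pages from pages needing a fresh crawl, compress the result) is taken from the text.

```python
import gzip
import xml.etree.ElementTree as ET

def build_change_log(changed, new_pages):
    """Build a compressed XML change log for search engine crawlers.

    `changed` maps a URL to the delta needed to regenerate the
    refreshed page from the copy the search engine already holds;
    `new_pages` lists URLs that still require a full crawl.
    (Element and attribute names below are illustrative only.)
    """
    root = ET.Element("changelog")
    for url, patch in changed.items():
        page = ET.SubElement(root, "page", url=url, action="patch")
        page.text = patch  # delta against the engine's cached copy
    for url in new_pages:
        ET.SubElement(root, "page", url=url, action="crawl")
    xml_bytes = ET.tostring(root, encoding="utf-8")
    return gzip.compress(xml_bytes)  # shrink the file before serving it
```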
The results of this study show that the traffic caused by search engines' crawlers can be reduced by 84 percent on average for text content. Binary content, however, poses challenges for which new algorithms will have to be developed. The proposed approach also mitigates the deep web issue. The per-domain XML files used by search engines could likewise be used by web browsers to refresh their caches, reducing the traffic generated by ordinary users, lowering perceived latency, and improving response times to HTTP requests.
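The browser-side reuse suggested here could follow the same pattern. Below is a minimal sketch, again assuming the hypothetical schema above, of how a browser might identify which of its cached URLs the change log marks as modified:

```python
import gzip
import xml.etree.ElementTree as ET

def stale_urls(compressed_log, cache):
    """Return cached URLs that the change log flags as changed.

    `cache` is the set of URLs the browser currently holds; flagged
    entries can be refreshed from the log's deltas instead of being
    re-downloaded, which is how the approach would also cut traffic
    from ordinary users.
    """
    root = ET.fromstring(gzip.decompress(compressed_log))
    return [page.get("url") for page in root.findall("page")
            if page.get("url") in cache]
```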
The study also sheds light on the deficiencies and weaknesses of the algorithms that monitor changes and generate binary files. For text-based web content, however, a substantial decrease in traffic is achieved.
The findings of this research can be adopted by developers of web server software and browsers, and by search engine companies, to reduce the internet traffic caused by crawlers and to cut costs.
The exponential growth of web content and of other internet-based services such as cloud computing and social networks has been causing contention for the available bandwidth of the internet. This research provides a much-needed approach to keeping that traffic in check.