Registered business name. If the claim is filed for a partnership, enter the name of the partnership.
Dates of the beginning and end of the tax year for which you are submitting the claim.
Identification number that pertains to your business. If you are a partnership and have a business number (BN), enter the business number. Make sure that you enter all 15 characters of the BN.
Provide the name, telephone number, and fax number of the person best suited to provide the supporting financial information for this claim.
Provide the name, telephone number, and fax number of the person best suited to provide the supporting technical information for this claim.
Complete this part for each project claimed for the tax year. If you wish, you may submit this part for only the 20 largest projects by dollar value at the time of filing.
However, the CRA may request this part for some, or all, of the remaining projects at a later time. Failure to provide this information will result in the disallowance of the expenditures claimed for the projects as SR&ED expenditures.
In this part of Form T661 you are asked to provide, on a project-by-project basis, information that establishes the nature of your scientific research and experimental development (SR&ED) work. This information enables the CRA to carry out an initial review of the work that you are claiming. This initial review helps to establish, with reasonable confidence, how the claimed work meets the SR&ED eligibility requirements. Establishing this confidence may allow the CRA to expedite the review and process the claim in a timely manner.
The CRA recommends that personnel who are familiar with the scientific or technical content of the work you are claiming prepare this part of Form T661.
Start date of the SR&ED project. The start of the project is defined as the point at which scientific or technological uncertainties are identified and the work to resolve those uncertainties commences.
Enter the completion date or expected completion date of the SR&ED project. The completion date of the project is defined as the point at which you:
From the table in Appendix 1, select the field of science or technology that best describes the primary field in which the SR&ED project was attempting to achieve an advancement. Note that this field may not be the same as the field of science or technology of the overall company project, or the field in which the company carries out its regular business. The field of science or technology you enter on this line is used for statistical and resource-management purposes only, not for determining the eligibility of the work. Appendix 1 is not an exhaustive list of all eligible fields of science or technology.
Indicate whether or not you made an SR&ED claim for this project in any previous tax year. Select only one. If the project is a continuation from a predecessor company, and it is the first time you are making a claim, select line 210 (first claim for the project).
Indicate whether any of the claimed SR&ED project work was done jointly or in collaboration with other businesses. For example, your answer will be yes if the work was done as a joint effort, with or without a formal agreement.
In this section you must provide the technical details of your basic research, applied research, or experimental development project. Lines 242, 244, and 246 are limited to 350, 700, and 350 words respectively, so your answers should focus on the technical facts and should be written in the technical language and style of those who did the actual work, or who understand and are familiar with the work.
Use existing materials and documents generated during the course of your work to extract the pertinent information needed to complete this section. You should retain these materials and documents so that the CRA can verify that there is a reasonable level of support for the SR&ED work that was performed in the tax year (see Appendix 2, “Documentation and other evidence to support your SR&ED claim”). The Self-Assessment and Learning Tool (SALT) may also help you identify and gather your SR&ED project information.
If you are using approved tax software to prepare your income tax return, your responses to these questions for each project should be included in the software and should not be submitted as a separate attachment.
If you are not using approved tax software to prepare your claim, use a separate sheet if necessary to respond to these questions. Each separate sheet must be clearly labelled with the project title and the question you are answering.
The technological advancements you were trying to achieve with this work were required for:
| | Materials, devices, or products | Processes |
|---|---|---|
| The creation of new | 235 | 236 |
| The improvement of existing | 237 | 238 |
The technological objective of this project was to improve data warehouse management techniques by concentrating on the compression of relational database tables. At the time this work began, numerous database compression methods were available and many of these had been commercialized in larger software applications. However, practically all of the methods relied on data being uniformly distributed and static in nature. By contrast, the overwhelming proportion of data entering data warehouses could not be assumed to be uniformly distributed and was almost certainly dynamic in character. We assumed that conventionally available data compression methods, such as the lossless dictionary approach, could be surpassed by developing methods that would exploit the unique properties of those data sets that were not uniformly distributed and were dynamic. A technological advancement was therefore sought in this project through the development of data compression algorithms based on an analysis of the dynamic character and non-uniform distribution of the data sets entering the data warehouse.

This work generated new technological knowledge regarding:

* the discovery and use of column value frequency of the initial table rows to create a block-based compression dictionary;
* the use of a table-wide list of most frequent values for the compression dictionary;
* the restriction of query/update/refresh operations to compressed blocks rather than entire tables;
* the organization and control of compression dictionaries in the buffer cache when calls are made to uncompress multiple blocks.

The performance of the various prototypes developed in this work was benchmarked using a number of measures based on CPU utilization and data throughput for operations including parallel load, delete/update operations, full table scan, and table access by row ID. One additional outcome of this work was that the dynamic, non-uniform data compression method developed here actually provided performance improvements for data backup and recovery operations when applied to very large databases in excess of 2.5 million rows (1.3 GB) such as those encountered in data warehouses. [320 words]
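To make the block-based dictionary idea in this example more concrete, the following minimal Python sketch shows one way a frequency-based, block-level compression dictionary could work. It is purely illustrative and is not the claimant's implementation; the names (BLOCK_SIZE, TOP_VALUES, build_dictionary, compress_table) and the cell-tagging scheme are assumptions.

```python
# Hypothetical sketch: frequency-based, block-level dictionary compression.
# All names and parameters are illustrative assumptions, not the claimant's code.
from collections import Counter

BLOCK_SIZE = 1000   # assumed number of rows per compression block
TOP_VALUES = 255    # assumed size of the table-wide list of most frequent values


def build_dictionary(rows):
    """Map the most frequent column values of the initial rows to short integer codes."""
    freq = Counter(value for row in rows for value in row)
    return {value: code for code, (value, _) in enumerate(freq.most_common(TOP_VALUES))}


def compress_block(block, dictionary):
    """Replace frequent values with dictionary codes; tag cells so codes stay distinct from raw values."""
    return [[("code", dictionary[v]) if v in dictionary else ("raw", v) for v in row]
            for row in block]


def decompress_block(block, reverse_dictionary):
    """Restore a compressed block using the reverse dictionary."""
    return [[reverse_dictionary[payload] if tag == "code" else payload
             for tag, payload in row]
            for row in block]


def compress_table(rows):
    """Compress a table block by block, using a dictionary built from the initial rows only."""
    dictionary = build_dictionary(rows[:BLOCK_SIZE])
    blocks = [rows[i:i + BLOCK_SIZE] for i in range(0, len(rows), BLOCK_SIZE)]
    return dictionary, [compress_block(block, dictionary) for block in blocks]
```

Because each block is compressed independently against a shared dictionary, query, update, and refresh operations can in principle touch individual compressed blocks rather than the entire table, which is the behaviour the example describes; handling dynamic data would additionally require refreshing the dictionary as the value distribution changes.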
Describe the scientific or technological uncertainty you encountered that led you to do the SR&ED. Scientific or technological uncertainty means that whether a given result or objective can be achieved, or how to achieve it, is not known or determined on the basis of generally available scientific or technological knowledge or experience.
In responding to this question, we suggest that you include the objective of the project and describe the scientific knowledge or the new or improved capability you were seeking. Your response should indicate the existing scientific or technological knowledge base at the onset of the SR&ED project. Describe the shortcomings or limitations of that knowledge base that prevented you from overcoming the scientific or technological uncertainties identified. In other words, your response should describe how the uncertainties could not be resolved on the basis of generally available scientific or technological knowledge or experience.
There were a number of specific technological obstacles that drove the systematic investigations described further below. We were looking for an appropriate methodology for modeling our dynamic, non-uniform data distribution in real data for the purposes of the compression prototypes. There were no methodologies, techniques, or models available to us to characterize dynamic, non-uniform data. Our review of available techniques revealed, in the early phase of the project, that we had to undertake an investigation leading to the development of a dataset model able to reflect, in an efficient way, our specific dataset characteristics.

The second technological shortcoming was that we did not know of, and could not find, any data compression technique or methodology that would specifically deal with this model of dynamic, non-uniform data. We realized that even if we developed a suitable model to characterize dynamic, non-uniform data, there would be no established data compression techniques that could effectively and efficiently exploit the general features of this abstract data model. The effectiveness of each feature had to be verified in terms of data integrity and benchmark performance comparisons.

Once a series of candidate compression algorithms became available, the subsequent technical shortcomings were associated with the possibility of implementing a dynamic compression technique for dataset additions and/or updates on a batch basis.

Finally, we were planning to develop an acceptable and valid methodology for setting up general rules relating to an optimal data table compression-block size applicable to both the initial data set analysis and the dynamic analysis. We felt that such a relationship should exist, and we decided to undertake an investigation to prove it. We also realized that such a methodology was not readily available, so we would have to address this issue and develop a technique potentially leading to determining an optimal data-block size. [314 words]
Following a review of available software methods and dataset characterization techniques, the first phase of the investigations began in March 2008 and focused on the analysis of a very large data set (known to be dynamic with a non-uniform distribution) in relational database form. This analysis involved a number of investigations, using selected well-known methods in software engineering, with the aim of creating a generalized model of the data set. It also included the extraction of a number of dataset-specific conclusions regarding row and column correlations and distributions, some of which are briefly outlined above in the technological advancements section. At the end of this first phase we found that a reasonably accurate data set model could be created. This was further tested, and the model's accuracy was verified and validated against several smaller relational databases available to us in the data warehouse.

In the second phase, starting in May 2008, a number of compression methods were developed in prototype form to exploit the general features of the data model. Each prototype carried a set of specific assumptions regarding how the dataset characteristics might be exploited, and each was subsequently verified for integrity and then benchmarked for performance. This benchmarking was done through measures of CPU utilization and data throughput for parallel load, delete/update operations, full table scan, and table access by row ID. In direct support of this work, several test scripts were written to test the compression algorithms. Although the development of these scripts involved no significant technological challenge, they were necessary to benchmark the new algorithms and determine the most appropriate solution. The benchmarking results were documented and are available for further review if requested.

The third phase was carried out in June and July 2008. Three candidate compression algorithms were modified to include implementations of several different dynamic compression techniques for dataset additions and/or updates. Each of these again had its data integrity verified and its performance benchmarked, the latter now including update/refresh-specific performance measures. In August 2008, a final prototype was selected for widespread commercial implementation, ending this aspect of the experimental development.

During October 2008 the implemented prototype was used to determine whether or not an optimal data table compression-block size could be determined by both the initial data set analysis and the dynamic analysis. However, this work failed to establish that such a relationship existed and was subsequently abandoned, ending the project in November 2008.

As part of this effort, the Company engaged an outside contractor for a period of two months, beginning in September 2008, to extend the data compression method to a wider range of common data warehouse operations. Included in this work was an exploration into the use of the implemented compression prototype for data backup and recovery operations. As a result of this work, it was found and documented that the prototype provided measurable performance improvements when applied to very large databases in excess of 2.5 million rows (1.3 GB), such as those typically encountered in data warehouses. Subsequent investigations revealed that this was primarily due to the construction of the compression dictionary rather than the data blocks. [521 words]
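The benchmarking approach described in this example, measuring data throughput for operations such as a full table scan, can be sketched in a few lines. The harness below is a hypothetical illustration only; the function names and the choice of rows-per-second as the throughput measure are assumptions, not details taken from the claim.

```python
# Hypothetical sketch of a throughput benchmark for a table-level operation.
# Names and the measurement approach are illustrative assumptions only.
import time


def rows_per_second(operation, rows, repeats=5):
    """Run an operation several times over the same rows and report the best observed throughput."""
    best_elapsed = min(timed_run(operation, rows) for _ in range(repeats))
    return len(rows) / best_elapsed


def timed_run(operation, rows):
    """Time a single run of the operation over the rows."""
    start = time.perf_counter()
    operation(rows)
    return time.perf_counter() - start


def full_table_scan(rows):
    """Placeholder workload: read every row, as a full table scan would."""
    for row in rows:
        _ = row


if __name__ == "__main__":
    sample = [(i, f"value-{i % 100}") for i in range(100_000)]
    print(f"full table scan: {rows_per_second(full_table_scan, sample):,.0f} rows/s")
```

A comparable harness applied to compressed and uncompressed versions of the same table would yield the kind of relative performance figures the example refers to.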
This website is for general information purposes only. Please check the Canada Revenue Agency website to confirm the most recent version.
Some materials were reproduced from the following pages:
www.canada.ca/en/revenue-agency/services/forms-publications/forms/t661.html
www.canada.ca/en/revenue-agency/services/forms-publications/publications/t4088.html