1. INTRODUCTION

Real-time three-dimensional echocardiography (RT3DE) has recently become available in clinical practice. It has the potential to overcome many of the limitations of 2D echocardiography, such as the need to estimate inherently three-dimensional parameters (ventricular volume, myocardial mass) from a very small number of 2D slices, and the limited reproducibility of such studies. However, image quality remains a major shortcoming of RT3DE. Poor quality is caused, for instance, by anatomical structures that block the path of the ultrasound beams, or by the dependence of signal strength on the angle between the beam and the normal to the anatomical surface. Combining apical and parasternal views may improve results: since this angle varies between views, structures that are difficult to appreciate in one view are likely to be clearer in the other.

Recently, we proposed and validated a method to align apical and parasternal RT3DE images [1]. The main idea behind that work is that the combination of views can be exploited to improve results on image analysis tasks such as segmentation and motion estimation. In this paper, we explore this idea and propose a motion estimation algorithm that combines images from apical and parasternal views.

Analysis of myocardial motion in echocardiographic sequences is key to the detection of major pathologies such as ischemic heart disease. Many automatic motion estimation methods have been proposed, but they are not yet routinely used in clinical practice. Different approaches have been explored, e.g. block matching [2], optical flow methods [3], [4] or feature tracking [5]; comparing their relative merits is a complex issue that will not be addressed in this paper. To our knowledge, previously published methods use only a single view. The approach proposed here combines apical and parasternal views to overcome the limitations of single-view echocardiography.
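For context only, the following is a minimal block-matching sketch in Python. It illustrates the general class of displacement-estimation techniques cited above; it is not the method of [2] nor the algorithm proposed in this paper, and the block size, search range and sum-of-absolute-differences (SAD) criterion are arbitrary choices made for clarity.

    # Minimal SAD block matching between two consecutive frames.
    # Illustrative sketch only; parameters are arbitrary assumptions.
    import numpy as np

    def block_matching(frame_a, frame_b, block=16, search=8):
        """Estimate one displacement vector per block of frame_a by finding
        the best-matching block in frame_b within a +/- `search` pixel window."""
        h, w = frame_a.shape
        flow = np.zeros((h // block, w // block, 2))
        for by in range(h // block):
            for bx in range(w // block):
                y0, x0 = by * block, bx * block
                ref = frame_a[y0:y0 + block, x0:x0 + block].astype(np.float64)
                best, best_dy, best_dx = np.inf, 0, 0
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y1, x1 = y0 + dy, x0 + dx
                        # Skip candidate blocks that fall outside the image.
                        if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                            continue
                        cand = frame_b[y1:y1 + block, x1:x1 + block].astype(np.float64)
                        sad = np.abs(ref - cand).sum()
                        if sad < best:
                            best, best_dy, best_dx = sad, dy, dx
                flow[by, bx] = (best_dy, best_dx)
        return flow

Given two grayscale frames f0 and f1 as 2D NumPy arrays, block_matching(f0, f1) returns a coarse per-block displacement field; practical echocardiographic motion estimators add refinements (e.g. regularization, subpixel matching) that are beyond the scope of this sketch.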
Real-time, three-dimensional echocardiography (RT3DE) has recently become available in clinical practice. This has the potential to overcome many of the limitations of 2D echocardiography, e.g. the need to calculate parameters that are inherently three-dimensional (ventricular volume, myocardial mass) using a very small number of 2D slices, or the limited reproducibility of the studies. However, image quality is still a major shortcoming of RT3DE. This problem is for instance caused by the presence of anatomical structures that block the path of the ultrasound beams, or the dependence of signal strength on the relative angle between the beam and the normal to the anatomical surface. Combining apical and parasternal views may improve results: as the angle between the beam and the surface normal varies between views, structures difficult to appreciate in one of the views are likely to be clearer in the other. Recently, we have proposed and validated a way to align apical and parasternal RT3DE images [1]. The main idea behind this work is that the combination of views can be exploited to improve results on image analysis tasks such as segmentation and motion estimation. In this paper, we explore this idea and propose a motion estimation algorithm that combines images from apical and parasternal views. Analysis of myocardial motion in echocardiographic sequences is key to the detection of fundamental pathologies such as ischemic heart disease. Many automatic motion estimation methods have been proposed. However, these are not yet routinely used in clinical practice. Different approaches have been explored, e.g. block matching [2], optical flow methods [3], [4] or feature tracking [5]. Comparing the relative merits of these is a complex issue and will not be addressed in this paper. To our knowledge, previously published methods have only used a single view. The new approach proposed here combines apical and parasternal views to overcome the limitations of single view echocardiography.