Dialysis – Optimal composition of the dialysate, with emphasis on its influence on blood pressure

Introduction. From the beginning of the dialysis era, the most appropriate composition of the dialysate has been one of the central topics in the delivery of dialysis treatment.

Methods. Key points relating to the composition of the dialysate were discussed in order to reach a consensus, focusing on the relationship with blood pressure behaviour.

Results. Sodium balance is the cornerstone of intra-dialysis cardiovascular stability and good inter-dialysis blood pressure control. Hypernatric dialysis carries the risk of a positive sodium balance, with the consequent possibility of a worsening sense of thirst and hypertension. Conversely, hyponatric dialysis may lead to a negative sodium balance, with the possibility of intra-dialysis cardiovascular instability and ‘disequilibrium’ symptoms including fatigue, muscle cramps and headache. The goal is to remove with dialysis the exact amount of sodium that has accumulated in the inter-dialysis interval. The conductivity kinetic model is applicable on-line at each dialysis session and has been proven to improve intra-dialytic cardiovascular stability in hypotension-prone patients. Therefore, it should be regarded as a promising tool to be implemented in everyday clinical practice. Serum potassium concentration, and its variations during dialysis treatment, certainly play a role in the genesis of cardiac arrhythmia. Potassium profiling, with a constant gradient between plasma and dialysate, should be implemented in clinical practice to minimize the arrhythmogenic potential of dialysis. Calcium plays a role both in myocardial contractility and in peripheral vascular resistance. Therefore, an increase in dialysate calcium concentration may be useful in cardiac-compromised, hypotension-prone patients. Acid buffering by means of base supplementation is one of the major roles of dialysis. The bicarbonate concentration in the dialysate should be personalized in order to reach a midweek pre-dialysis serum bicarbonate concentration of 22 mmol/l. The role of convective dialysis techniques in cardiovascular stability is still under debate. It has been demonstrated that dialysate temperature and sodium balance play a role, and this should be taken into account. Whether removal of vasoactive, middle-sized compounds by convection plays an independent role in improving cardiovascular stability is still uncertain.

Conclusions. The prescription of the dialysis fluid is moving from a pre-fixed, standard dialysate solution towards individualization of the electrolyte and buffer composition, not only from one dialysis session to another, but also within the same session (profiling), in order to provide patients with optimal blood purification coupled with a high degree of tolerability.


Haemodialysis (HD) treatment originated in the 1960s, when, for patients affected by end-stage renal disease (ESRD), there was no alternative to death by uraemia in a short period of time. It was very exciting to see that ESRD patients, otherwise destined to die, did survive with such primitive and empirical renal substitutive treatments. The subsequent two decades were devoted to: (i) improving the dialysis technical equipment and know-how, in order to render treatments safer and better tolerated (alarms with security devices on HD monitors, controlled ultrafiltration, shift to bicarbonate as an acid-buffer and microbiological purity of the dialysate); (ii) studying the pathophysiological basis of blood purification, in order to provide patients with less empirical treatments and optimize the use of the tools offered by technology (quantification of dialysis dose, studies on sodium balance and isolated ultrafiltration and their relationship with intra-HD hypotension). At the same time, the pharmacological treatment of ESRD patients also improved, with the advent of potent and selective anti-hypertensive drugs, active vitamin D compounds, phosphate binders and erythropoietin. These drugs made it possible to control more efficiently the high burden of cardiovascular (CV) disease that affects ESRD patients and to further improve their quality of life. As a result of the exciting advances in the care of dialysis patients and in the practice of renal transplantation, we have observed a dramatic increase in the number of ESRD patients referred for renal substitutive treatment over the last decade. This has led to a sort of ‘negative’ selection of chronic dialysis patients, who now represent a more diseased population, mainly composed of elderly and diabetic patients with a heavy burden of co-morbid conditions, largely related to systemic vasculopathy and a poor performance status [1].
Therefore, the present challenge of dialysis therapy is to provide treatments with the highest possible bio-compatibility and tailored to individual clinical conditions. In other words, modern dialysis therapy should be personalized to meet the requirements of a hypotension-prone population with poor compliance in terms of water and salt restriction in the inter-HD interval.

This report discusses the most relevant current issues related to adequate exchanges between uraemic blood and a liquid of known composition, i.e. the dialysate, with particular focus on its influence on intra- and inter-HD blood pressure. A critical appraisal of convective techniques, often advocated as the most efficient with respect to CV stability, is provided. A glimpse of the future is also given, with a discussion of the possibility of ‘oral dialysis’. Lastly, the Accord Group provides its consensus on important key points.


Sodium

It is well known that intra-HD sodium removal has very important clinical implications. Changes in the body sodium pool play a pivotal role in the genesis of intra-HD CV instability, on the one hand, and in inter-HD over-hydration and hypertension, on the other [2]. Despite technological advances in dialysis equipment and modalities, CV instability during HD treatment is still an important clinical problem for several reasons, including the progressive ageing of the dialysis population, the increased burden of patients with diabetes and CV co-morbidity, and the tendency to shorten dialysis treatment time by increasing blood flow, dialyser surface area and ultrafiltration rates. Hypertension is epidemic in HD centres. Despite improved pharmacotherapy, as many as one-third to one-half of HD patients are hypertensive. This is relevant, as hypertension, together with anaemia and CV calcification, is the major determinant of CV disease in dialysis patients, accounting for more than 50% of mortality [1]. The high frequency of hypertension in HD patients may be attributed, at least in part, to the inability to control body volume with ‘ultra-short’ dialysis and to remove, with dialysis, the exact amount of sodium accumulated in the inter-HD period. To further complicate the picture, given the daily fluctuations in individual dietary sodium and water intake, the dialysis prescription should be different not only amongst patients, but also within individual patients from one dialysis session to another.

In the dialysis prescription there are different parameters that can be modified: amongst these, the sodium concentration in the dialysate plays a pivotal role.

Dialysate sodium content

Sodium crosses the dialysis membrane by diffusion and convection. It is well known that the sodium fractions transported by these two mechanisms are not the same [3], and this is important to keep in mind in order to correctly define intra-HD sodium kinetics and to choose the proper dialysate sodium concentration. In the past, some difficulties arose from the use of different laboratory instruments used to measure the blood and dialysate sodium concentrations (flame photometry or ionometry). Nowadays, dialysis departments generally use direct ionometry, which measures the ionized plasma water sodium concentration in the blood and ionized sodium concentration in the dialysate. It is acceptable for clinical purposes to assume that the plasma water sodium concentration directly measured by ionometry corresponds to the ultrafiltrable fraction, i.e. sodium removable by convection. Diffusive sodium transport depends on the difference between the sodium concentration in blood and dialysis fluid. Because the dialysate is protein-free, the ionized sodium concentration, directly measured by ionometry, can be considered the concentration suitable to calculate the trans-membrane sodium diffusion gradient from the dialysate site. Given the Donnan effect, mainly due to the negative charge of plasma proteins, the blood sodium concentration measured by direct ionometry should be corrected for a Donnan factor of 0.967 in order to determine the actual sodium concentration available for diffusion from the blood. When the latter is known, it is possible to change the dialysate sodium concentration in order to obtain an ‘isonatric’, ‘hyponatric’ or ‘hypernatric’ dialysis [3]. When dialysate sodium activity corresponds to plasma water sodium activity multiplied by 0.967 (‘isonatric’ dialysate) and there is no ultrafiltration, the net intra-HD sodium removal is zero. Hyponatric dialysate can theoretically be used if the patient should lose sodium by diffusion. 
This was usual clinical practice in the past. We feel that this practice should be avoided at present. In fact, secondary to the loss of sodium by diffusion, plasma osmolarity decreases, with two possible consequent side effects: (i) cellular overhydration, caused by osmotic fluid shift from the extracellular to the intracellular compartment, which significantly contributes to the so-called disequilibrium syndrome (fatigue, ‘washed-out’ feeling, muscle cramps, headache, neurological symptoms); (ii) intra-HD hypotension, caused by insufficient refilling of the intra-vascular compartment, depleted by ultrafiltration, from the interstitium and the intracellular space. Hypernatric dialysate is more frequently used in order to avoid excessive sodium losses due to ultrafiltration and prevent CV instability. In fact, when the sodium concentration in the dialysate is higher than the patient's pre-HD blood sodium concentration, the patient is supplied with sodium via diffusion for as long as necessary to equalize the difference in concentrations. In this case, the diffusive sodium transport to the patient counteracts the convective sodium removal due to ultrafiltration. However, ‘hypernatric’ dialysis carries the drawback that it may cause insufficient net sodium removal, and consequently favour the development of refractory hypertension. Moreover, it can trigger an intense sense of thirst, causing high water intake in the inter-HD interval, which has to be treated with high ultrafiltration rates during HD, which, in turn, favour hypotensive episodes. Intra-HD hypotension may prevent dry body weight achievement and prompt the dialysis staff to administer hypertonic saline or to increase dialysate sodium concentration: a vicious circle may be established, resulting in cardiac failure and/or pulmonary oedema. Hence, the goal of dialysis is to remove the exact quantity of sodium that has accumulated in the inter-dialysis period in order to reach a zero balance.
According to the single-pool sodium kinetic model, zero sodium balance between the intra-HD sodium removal and the inter-HD sodium accumulation can be achieved by individualizing the dialysate sodium concentration for each dialysis, to reach a constant end-dialysis plasma water sodium concentration, and by applying a rate of ultrafiltration equal to the interdialytic increase in body weight [4]. Unfortunately, this model is not suitable for routine use because it requires measurement of the pre-dialysis plasma water sodium concentration at each dialysis session. Consequently, the most frequent therapeutic measure against intra-HD hypotensive complications is still the application of a hypernatric dialysate. The downside of this is the risk that the patient leaves the dialysis treatment with hypernatraemia, having to take in fluid to compensate for the sensation of thirst, i.e. the therapy as such causes excess fluid intake. On the other hand, sodium therapy is rapidly effective and is extremely favourable from a cost-effectiveness point of view.
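The Donnan correction described above can be illustrated with a short numerical sketch that classifies a dialysate as isonatric, hyponatric or hypernatric relative to the Donnan-corrected plasma water sodium. The 0.967 Donnan factor is taken from the text; the function names, the example concentrations and the 0.5 mmol/l tolerance are illustrative assumptions only.

```python
DONNAN_FACTOR = 0.967  # correction for the negative charge of plasma proteins [3]

def diffusible_plasma_sodium(plasma_water_na: float) -> float:
    """Sodium concentration (mmol/l) effectively available for diffusion
    from blood: direct-ionometry plasma water sodium times the Donnan factor."""
    return plasma_water_na * DONNAN_FACTOR

def classify_dialysate(dialysate_na: float, plasma_water_na: float,
                       tol: float = 0.5) -> str:
    """Label the dialysate relative to the Donnan-corrected plasma value.
    `tol` is an arbitrary tolerance for calling the dialysate 'isonatric'."""
    target = diffusible_plasma_sodium(plasma_water_na)
    if abs(dialysate_na - target) <= tol:
        return "isonatric"   # no net diffusive sodium flux (zero-UF case)
    return "hypernatric" if dialysate_na > target else "hyponatric"

# A plasma water sodium of 143 mmol/l leaves ~138.3 mmol/l available for
# diffusion, so a 138 mmol/l dialysate is approximately isonatric:
print(classify_dialysate(138.0, 143.0))  # → isonatric
```

Under this sketch, raising the dialysate sodium above the Donnan-corrected plasma value reproduces the ‘hypernatric’ situation discussed in the text, with diffusive sodium gain counteracting convective removal.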

Sodium profiling

During the past several years many authors have proposed the use of sodium profiles in order to reduce intra-HD CV instability and minimize the possible complications of a high-sodium dialysate. The rationale is that: (i) by increasing the conductivity, i.e. sodium concentration, of the dialysate, sodium doses of different levels and with different kinetics can be preventively administered to the patient (application phase); (ii) the amount of sodium that has been administered during the application phase is subsequently removed during the second phase (removal phase).

Some authors [5,6] have proposed ‘sodium ramping’, which consists of the early use of a higher concentration of sodium in the dialysate than that found in the blood, with the sodium content in the dialysate being decreased continuously or in a stepwise manner as the session progresses. Frequently, sodium profiles combine a variation of dialysate sodium concentration during HD sessions with a variation of ultrafiltration rate. Generally, a higher ultrafiltration rate is combined with a higher dialysate sodium concentration in the first phase of dialysis, which is reversed in the final phase [7]. The introduction of continuous blood volume monitoring on-line during HD has offered the opportunity of changing both the ultrafiltration rate and the dialysate sodium concentration, in order to maintain a constant blood volume reduction rate, considering hypovolaemia as the major cause of intra-HD hypotension [8,9].

When sodium profiling was compared with standard HD treatments, the results generally showed an improvement in CV stability. However, this was frequently associated with higher post-HD plasma water sodium concentration, higher pre-dialysis body weight and higher blood pressure values as a result of inadequate sodium removal [6,7].

In the study by Oliver et al. [7], the patients randomized to a profiled treatment were submitted to a progressive decrease in dialysate sodium concentration from 152 mmol/l at the start to 142 mmol/l at the end of the dialysis session, compared with a constant dialysate sodium concentration of 142 mmol/l for the patients randomized to conventional treatment. This implied a marked difference in sodium removal, as also expressed by a higher post-HD serum sodium concentration and pre-HD body weight observed in the profiled treatment group.

Many of the studies that evaluated the effect of different sodium profiles on CV instability did not take into account the sodium balance, as also evidenced by the fact that in many studies there was insufficient information to calculate sodium removal, including the methods used to determine sodium concentration. In other words, whilst sodium profiling may be associated with improved intra-HD CV stability, the pathophysiological basis remains ill understood. Sodium profiling has not solved the issue of sodium balance.

Conductivity kinetic model

In 1980, Gotch et al. [4] developed a single-pool sodium kinetic model for HD in order to obtain a zero sodium balance over the treatment cycle (intra-HD sodium removal = inter-HD sodium accumulation). Using flame photometry to determine total sodium concentrations, this early analytical single-pool kinetic model showed a degree of imprecision of ∼2.8 mmol/l in predicting end-dialysis plasma water sodium concentration. For a final total body water volume of 40 l (58% of a 70 kg body weight), this corresponds to an imprecision of ∼112 mmol in the predicted sodium removal. However, on the basis of Gotch’s theoretical premises, Di Filippo et al. [10] demonstrated more recently, using direct ionometry, a level of imprecision in predicting sodium balance of <34 mmol. Unfortunately, the model is unsuitable for routine clinical use because of the need for blood sampling and laboratory determinations at each dialysis session.

Given the linear correlation between the conductivity of dialysate and its sodium content, the conductivity values can be used instead of sodium concentration values. According to the basic theory developed by Polaschegg [11] and Petitclerc et al. [12], if the dialysate conductivity is measured at the dialyser inlet and outlet ports, at two different inlet conductivity values, ionic dialysance and plasma water conductivity can be easily calculated without the need of blood samples and laboratory determinations. The sodium kinetic model can therefore be changed into the conductivity kinetic model, allowing the prediction of final plasma water conductivity, when the dialysate conductivity is known, and of the dialysate conductivity required to obtain a desired final plasma water conductivity.
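The two-step measurement can be sketched as follows. This is a minimal illustration assuming a simple dialysate-side mass balance, Qd·(Cdi − Cdo) = D·(Cdi − Cp), and that plasma water conductivity Cp is unchanged between the two inlet settings; solving the balance at both settings eliminates Cp and yields the ionic dialysance D. All numerical values are hypothetical.

```python
def ionic_dialysance(qd, cdi1, cdo1, cdi2, cdo2):
    """Two-point conductivity method (in the spirit of Polaschegg [11] and
    Petitclerc et al. [12]): from inlet/outlet dialysate conductivities
    (mS/cm) at two inlet values, with dialysate flow qd (ml/min), return
    (ionic dialysance D in ml/min, plasma water conductivity Cp in mS/cm)."""
    # Subtracting the mass balance written at the two settings cancels Cp:
    d = qd * (1.0 - (cdo1 - cdo2) / (cdi1 - cdi2))
    # Back-substitute into the first measurement to recover Cp:
    cp = cdi1 - qd * (cdi1 - cdo1) / d
    return d, cp

# Hypothetical example: qd = 500 ml/min, two inlet conductivity steps.
d, cp = ionic_dialysance(500.0, cdi1=14.0, cdo1=13.8, cdi2=15.0, cdo2=14.4)
print(round(d, 1), round(cp, 2))  # → 200.0 13.5
```

No blood sample enters the calculation, which is precisely why the conductivity kinetic model can be applied on-line at every session.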

There are modules in some HD machines that are capable of determining ionic dialysance and plasma water conductivity on-line, allowing routine application of the conductivity kinetic model. It has been shown that the conductivity kinetic model has a level of imprecision in predicting end-dialysis plasma water conductivity of <∼0.14 mS/cm, roughly equivalent to ∼1.4 mmol/l in terms of ionized plasma water sodium concentration and to ∼56 mmol in terms of sodium balance [13]. Paired filtration dialysis (PFD) is a special case of haemodiafiltration (HDF) in which convection and diffusion take place separately, allowing on-line monitoring of ultrafiltrate conductivity. A multi-centre, prospective, controlled and randomized trial [14] demonstrated that the attainment of ‘constant’ end-dialysis ultrafiltrate conductivity values, by means of the application of a dedicated conductivity kinetic model [15], makes it possible to improve CV stability in HD patients prone to intra-HD hypotension. We would like to stress that the improvement in CV stability was obtained by reducing the variability of end-dialysis ultrafiltrate conductivity, mainly related to day-to-day variations in sodium intake. In fact, these results were obtained by simply modifying the dialysate conductivity in order to reach an ultrafiltrate conductivity at the end of each session that was equal to the mean value determined in the same patient during the run-in period. This value was not different between the two treatment arms, but the variability of end-dialysis ultrafiltrate conductivity was lower during the experimental treatment than during the conventional treatment.
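The arithmetic behind these imprecision figures is a straightforward single-pool conversion from a concentration error to a mass-balance error, using the 40 l final total body water volume assumed earlier in the text:

```python
TBW_L = 40.0  # final total body water assumed in the text (58% of 70 kg)

def sodium_balance_error(conc_error_mmol_l, tbw_l=TBW_L):
    """Translate an error in predicted end-dialysis sodium concentration
    (mmol/l) into an error in predicted sodium removal (mmol), under the
    single-pool assumption that the error is distributed over total body water."""
    return conc_error_mmol_l * tbw_l

# Flame-photometry kinetic model: ~2.8 mmol/l  -> ~112 mmol
# Conductivity model: ~0.14 mS/cm ~ 1.4 mmol/l -> ~56 mmol
print(sodium_balance_error(2.8), sodium_balance_error(1.4))
```

The same conversion explains why halving the concentration imprecision halves the uncertainty in sodium balance.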

In the future, it will probably be possible, due to the progressive development of automatic systems, to combine the benefits of sodium profiling, minimizing the complications of hypernatric dialysis, with the benefits of the conductivity kinetic model, providing an adequate sodium removal.


Potassium

In order to be adequate, potassium removal during dialysis should be equal to the amount accumulated during the inter-dialytic period. However, as it is very difficult to quantify both inter-dialytic loading and intra-dialytic removal in clinical practice, potassium removal by dialysis is deemed satisfactory if pre-dialysis hyperkalaemia is avoided. For stable patients, a dialysate potassium of 2 mmol/l is considered advisable in order to keep pre-HD plasma potassium levels below 6 mmol/l. On the other hand, the safety of the dialysate potassium concentration is also related to the avoidance of hypokalaemia and dialysis-induced arrhythmias. Post-HD serum potassium concentration is influenced not only by the pre-HD serum potassium concentration and the dialysate potassium concentration, but also by plasma tonicity and changes in plasma tonicity following HD. For an excellent discussion of various aspects of changes in water and electrolyte homeostasis in dialysis, the reader is referred to a recent review by Redaelli [16].

Current dialysis practice is rather uniform and does not take into account pre-HD serum electrolyte (especially potassium) levels. In a recent study, 17% of the patients with a pre-HD serum potassium <4.0 mmol/l had a very low dialysate potassium concentration (0–1 mmol/l); conversely, only 18% of the patients with such low potassium dialysates had a pre-HD serum potassium concentration >5.0 mmol/l [17].

Dialysate potassium–arrhythmia relationship

Depending on the study and on the definition of ventricular ectopic activity (VEA), the reported incidence of ventricular ectopic beats ranges from 18 to 76% in ESRD patients. However, a study by Sforzini et al. [18] failed to demonstrate a significant correlation between mortality and VEA in patients followed-up for 4 years.

Therefore, it has been questioned whether the occurrence of complex ventricular arrhythmia is influenced by dialysate potassium concentration. One investigation in 74 long-term HD patients (mean age: 44.5 years) failed to show, on multivariate analysis, any influence of dialysate potassium concentration [19]. Another study performed in 78 HD patients claimed an impact of low potassium concentration in the dialysate [20]. Thus, this issue is still controversial. From this perspective, it is important to investigate the influence of serum potassium and of its changes on QT-segment duration and dispersion, and on the occurrence of late ventricular potentials.

Serum potassium changes influence QT duration and dispersion

QT dispersion (maximum minus minimum QT interval on standard 12-lead electrocardiogram) is a marker of ventricular repolarisation variability. It is known to be increased in various ‘high-risk’ groups, such as diabetic patients, patients with heart failure and patients with essential hypertension. QT dispersion is also observed in individuals with hypertrophic cardiomyopathy and episodes of ventricular tachyarrhythmia. In ESRD, which is well known to be associated with an increased risk of malignant arrhythmias, there are relatively few data on the impact of HD sessions on QT dispersion. Cupisti et al. [21] investigated 20 HD patients without significant co-morbidity. They found no correlation between the degree of intra-HD serum potassium concentration changes and the increase in QT dispersion. A study by Covic et al. [22] looked for the effects of a HD session on QT interval and QT dispersion in 68 patients without cardiac co-morbidity. Patients in whom the dialysis session led to an increase in QT interval started HD with significantly lower serum potassium and higher serum ionized calcium concentrations, and displayed a greater reduction in serum calcium concentration following HD. Only 39 subjects had an increase in QT dispersion after dialysis, while in 29 a stable/decreased QT dispersion was recorded. Patients with greater increases in QT dispersion following HD had lower pre-HD serum potassium levels (r = 0.28, P = 0.018), although there was no significant relationship between the change in serum potassium concentration and the change in QT dispersion. Thus, it appears that the main predictor of changes in QT interval, across a single dialysis session, is potassium-related, i.e. associated with pre-HD serum potassium levels [22].
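The definition given above (maximum minus minimum QT interval across the 12 leads) translates directly into a trivial computation. The lead values below are purely hypothetical:

```python
def qt_dispersion(qt_ms):
    """QT dispersion (ms): maximum minus minimum QT interval across the
    measurable leads of a standard 12-lead electrocardiogram."""
    if len(qt_ms) < 2:
        raise ValueError("need QT intervals from at least two leads")
    return max(qt_ms) - min(qt_ms)

# Hypothetical QT intervals (ms) measured on 12 leads:
leads = [402, 398, 410, 395, 420, 405, 430, 399, 403, 407, 396, 401]
print(qt_dispersion(leads))  # → 35
```

Comparing this value before and after a session is the measurement the studies cited here rely on.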

Although the above presented data would suggest that intra-HD serum potassium changes do not affect QT dispersion, these studies were not aimed at elucidating this relationship. Indeed, Cupisti et al. [23] elegantly showed that during 1 h high-rate pure ultrafiltration, QT dispersion did not change significantly (despite a parallel increase of the sympathetic overdrive); in contrast, an evident increment in QT dispersion was observed 1 h after switching to diffusive HD, slowly progressing until the end of the HD session and returning to baseline values within 2 h of the end of the HD session. Most importantly, this increase in QT dispersion was totally blunted during a standard HD with no plasma potassium decrease (potassium dialysate profiling) and reappeared when the dialysate potassium concentration was restored to 2.0 mmol/l [23].

In summary, the pre-dialysis serum potassium concentration may play a role in certain subsets of patients. Therefore, intra-HD serum potassium changes are potentially relevant when they induce a sufficiently large decrease in serum potassium concentration, with each individual having a different arrhythmogenic threshold.

Influence of potassium changes on ventricular late potentials

Signal-averaged electrocardiography (SAECG) has been developed in recent years with the purpose of detecting ventricular late potentials non-invasively, to identify patients at risk of sudden death or ventricular tachycardia. Ichikawa et al. [24] performed SAECG in 42 patients before and after HD. There was a significant correlation between the changes in low-amplitude signal under 40 μV in the latter part of QRS (LAS40) and those in serum potassium concentration during HD. Furthermore, they examined the relationship between LAS40 and pre-HD serum potassium concentration. In the low-potassium group of patients (pre-HD potassium concentration <5.0 mmol/l), there was no significant correlation between variation in LAS40 and the change in serum potassium concentration. The opposite was true for the high pre-HD serum potassium concentration group. Thus, SAECG indices worsened during HD sessions. An insufficient decrement of serum potassium concentration by HD is suggestive of an arrhythmogenic effect in patients with high pre-HD serum potassium concentrations [24].

Can we change the increased arrhythmogenic potential of HD patients by manipulating the kinetics of intra-HD potassium removal?

Redaelli and colleagues [25] performed a prospective, randomized, cross-over study, specifically designed to clarify whether a new mode of HD potassium removal, using a decreasing intra-HD dialysate potassium concentration and a constant plasma-dialysate potassium gradient throughout the HD session, is capable of reducing the arrhythmogenic effect of standard HD. This was compared with standard HD sessions (i.e. constant dialysate potassium concentration and decreasing plasma-dialysate potassium gradient). In the potassium-profiled dialysis treatment, the initial dialysate potassium concentration was set 1.5 mmol/l lower than the serum potassium concentration, and decreased exponentially to 2.5 mmol/l at the end of HD. Although the initial plasma-dialysate potassium gradient was 2.3 times lower during the potassium-profiled dialysis than during standard HD, pre-HD potassium levels were not adversely influenced. The potassium-profiled dialysis decreased the number of premature ventricular complexes/hour, and premature ventricular complex couplets/hour, by 36 and 32%, respectively (P<0.05) [25]. Thus, potassium profiling of the dialysate may be an efficient tool to decrease the arrhythmogenic effect of HD.
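One possible shape for such a profiled prescription can be sketched as follows. The 1.5 mmol/l starting gradient and the 2.5 mmol/l end-of-session concentration come from the study description above; the specific exponential form, the 240 min session length and the example serum value are illustrative assumptions, not the protocol actually used in [25].

```python
def dialysate_potassium(t_min, serum_k0, session_min=240.0,
                        k_end=2.5, gradient=1.5):
    """Profiled dialysate potassium (mmol/l) at time t_min: start `gradient`
    below the pre-HD serum value and decay exponentially so that the
    concentration reaches `k_end` exactly at the end of the session."""
    k_start = serum_k0 - gradient
    # Exponential interpolation: k(0) = k_start, k(session_min) = k_end.
    return k_start * (k_end / k_start) ** (t_min / session_min)

# For a pre-HD serum potassium of 5.5 mmol/l the profile runs 4.0 -> 2.5:
profile = [round(dialysate_potassium(t, 5.5), 2) for t in (0, 120, 240)]
print(profile)  # → [4.0, 3.16, 2.5]
```

The point of the shape is that, as serum potassium falls during the session, the dialysate tracks it downwards, keeping the plasma-dialysate gradient roughly constant instead of letting it collapse as in standard HD.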

Influence of dialysate potassium on intra-HD blood pressure

Low serum potassium is a risk factor for CV diseases, including hypertension in the non-renal general population. There are several potential reasons to incriminate dialysate potassium concentration in blood pressure reduction during HD sessions: (i) hypokalaemia may exacerbate autonomic dysfunction; (ii) hypokalaemia alters cardiac inotropism; and (iii) intra-HD potassium loss accounts for a modest decrease in total osmoles.

Direct evidence tended rather to negate the impact of dialysate potassium on blood pressure [26]. Dolson and co-workers [26] analysed the effect of three different dialysate potassium concentrations (1, 2 and 3 mmol/l, respectively) in 11 HD patients. Blood pressure decreased during the HD session as excess fluid was removed, regardless of the dialysate potassium concentration. Significant increases in blood pressure occurred 1 h post-HD, compared with the levels determined at the end of the dialysis session, when 1 and 2 mmol/l potassium levels in the dialysate were used, but not when 3 mmol/l was used [26].


Calcium

Calcium mass balance studies showed that in a patient with a normal serum calcium before HD, dialysate calcium concentrations of 1.25 (low concentration) and 1.75 mmol/l (high concentration) result, respectively, in a negative and a positive calcium balance [27]. Calcium ions play a pivotal role in the contractile process of both vascular smooth muscle cells and cardiac myocytes. However, because of the frequent use of calcium salts as phosphate binders, it is often advised to use low dialysate calcium concentrations to prevent or treat hypercalcaemia [28,29]. In general, a calcium concentration of 1.50 mmol/l is advocated. Several studies in dialysis patients have shown that changes in serum ionized calcium levels have significant haemodynamic effects [30,31]. The effect of dialysate calcium concentration on blood pressure seems to be mediated predominantly through changes in myocardial contractility, although several investigators also found a change in vascular reactivity [31–33]. A recent study reported a significantly larger decline in blood pressure during dialysis with low dialysate calcium concentration, compared with high dialysate calcium concentration, in patients with normal cardiac function, which was mainly related to decreased left ventricular contractility with the use of low calcium dialysate [34]. In another study, in cardiac-compromised patients, the effects of low calcium and high calcium dialysate, respectively, on the systolic blood pressure course were compared [35]. Systolic blood pressure decreased to a statistically and clinically significant degree during ultrafiltration + HD with the use of low calcium dialysate. In contrast, during high calcium dialysis in the same patients, mean arterial pressure remained stable. This effect was mediated by direct changes in myocardial contractility, whereas systemic vascular resistance remained unchanged.
Therefore, an increase in dialysate calcium concentration has a clinically important positive effect on the blood pressure response during ultrafiltration + HD in cardiac-compromised patients. According to these results, it is conceivable to suggest the use of a 1.75 mmol/l calcium dialysate as a strategy for improving haemodynamic stability in cardiac-compromised patients. Calcification in these patients will probably not affect life expectancy immediately, whereas, by preventing intra-HD hypotension, a high dialysate calcium concentration of 1.75 mmol/l might have beneficial effects on quality of life. Moreover, the advent of calcimimetics may in the future allow the renewed use of higher dialysate calcium concentrations than at present.

Acid buffering

As the kidney is a key organ of hydrogen ion (H+) handling, metabolic acidosis is one of the main complications of uraemia. The endogenous production of H+, mainly related to protein ingestion, is ∼0.77 mmol/g of protein catabolized [36]. H+ accumulation in the blood of uraemic patients is buffered by plasma bicarbonate, which is used as a surrogate marker of acidaemia. Recently, by comparing total CO2 measurements performed with the same method but at different times and places, it has been shown that delayed sample handling may jeopardize the results and the interpretation of the acid–base status of dialysis patients [37]. The blood sample for bicarbonate (as total CO2) assessment should be processed quickly, protected from air contact, and assessed in the same laboratory. Relationships between protein intake and plasma bicarbonate concentration, and between nutritional markers and plasma bicarbonate concentration, have been reported recently [38,39]. Paradoxically, at least on a short-term basis, a decrease in plasma bicarbonate level is associated with a more favourable constellation of nutritional markers, because it reflects an adequate protein intake. It has also been reported that the adjusted survival of HD patients is decreased when pre-HD bicarbonate is below 18 mmol/l or above 24 mmol/l [40]. The recommended goal proposed by the K/DOQI guidelines for the pre-HD midweek plasma bicarbonate level is 22 mmol/l [41].
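The ∼0.77 mmol/g figure makes it easy to estimate the daily acid load that dialysis ultimately has to buffer. The body weight and protein intake in the example below are illustrative assumptions only:

```python
def daily_acid_load(protein_g_per_day, h_mmol_per_g=0.77):
    """Endogenous H+ production (mmol/day) from protein catabolism, using
    the ~0.77 mmol H+ per gram of catabolized protein cited in the text [36]."""
    return protein_g_per_day * h_mmol_per_g

# Illustrative assumption: a 70 kg patient catabolizing ~1.0 g/kg/day of protein
# generates roughly 54 mmol of H+ per day:
print(round(daily_acid_load(70 * 1.0), 1))  # → 53.9
```

Summed over a 2- or 3-day inter-HD interval, this is the order of magnitude of the base deficit that the bicarbonate flux from the dialysate must replace.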

The contribution of dialysis to correcting metabolic acidosis in ESRD patients occurs through buffer supply, mainly bicarbonate, rather than through H+ clearance. Diffusive influx of buffer into the patient has been used since the beginning of the dialysis era. Because of precipitation with calcium and magnesium, and because of the risk of bacterial contamination, bicarbonate was rapidly abandoned and replaced by acetate during the first two decades of dialysis therapy. The key advantages of acetate were its equimolar conversion to bicarbonate, a bacteriostatic effect and low cost. However, acetate-induced side-effects were reported in a large number of studies during the 1980s, when high-efficiency HD came into progressive use, owing to the limited hepatic acetate metabolism of at least some patients. This problem was explored by Vinay et al. [42] by analysing transient changes in plasma total CO2 during and after the HD session, reflecting the replacement of bicarbonate lost into the dialysate by the conversion of acetate into bicarbonate. In that study, a small number of patients (10%), especially women, were unable to metabolize the acetate load correctly. Even though comparative studies of acetate and bicarbonate as the buffer for HD treatment have reached controversial conclusions, untoward effects of acetate have been frequently reported, such as hypoxaemia, vasodilatation, depressed left ventricular function (enhancing the risk of hypotensive episodes during dialysis treatment) and impaired lipid and ketone acid metabolism. Technical improvements avoiding carbonate salt precipitation, together with the development of high-flux/high-efficiency dialysis (which worsened acetate-induced side-effects), have progressively led to the reintroduction and generalization of bicarbonate as the preferred dialysate buffer over the last two decades. Bicarbonate is currently the buffer used routinely in HD procedures.
The bicarbonate flux from the dialysate to the patient is determined by the trans-membrane concentration gradient and by bicarbonate dialysance. The usual average dialysate concentration is 35 mmol/l, obtained from proportioning dialysis stations that mix water with bicarbonate (from solution or dry powder) and an ‘acid’ concentrate containing a small amount of acetate or lactate together with sodium, potassium, calcium and magnesium. Usually, a plateau of plasma bicarbonate is reached after 2 h of HD at an approximate value of 27–30 mmol/l. In the recent French cross-sectional study quoted above [38], midweek pre-HD plasma bicarbonate averaged 22.8±3.5 mmol/l in 7123 patients. The fate of plasma bicarbonate during the inter-HD period has been studied by Graham et al. [43] in nine patients treated with a dialysate bicarbonate concentration of 35 mmol/l; the time-averaged concentration of plasma bicarbonate was calculated at 27.0±1.2 mmol/l. These authors also compared the pre-HD plasma bicarbonate in 46 patients after a 2 and a 3 day interval, respectively. The pre-dialysis levels were significantly lower after the 3 day interval (22.1±0.6 vs 23.0±0.5 mmol/l).
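The gradient-driven flux and the intra-dialytic plateau described above can be illustrated with a minimal single-pool model, dC/dt = [D·(Cd − C) − G]/V, where D is bicarbonate dialysance, Cd the dialysate concentration, G the intra-dialytic mobilization of buffered acid into the bicarbonate space, and V the apparent distribution volume. All parameter values below are assumptions chosen only to reproduce the plateau behaviour quoted in the text; they are not measured data:

```python
# Minimal single-pool sketch of intra-HD plasma bicarbonate kinetics.
# All parameters are illustrative assumptions.
D  = 10.8   # bicarbonate dialysance x flow, l/h (~180 ml/min)
CD = 35.0   # dialysate bicarbonate, mmol/l
G  = 65.0   # intra-HD mobilization of buffered acid, mmol/h (assumed)
V  = 15.0   # apparent bicarbonate distribution volume, l (assumed)

def simulate(c0=20.0, hours=4.0, dt=0.01):
    """Euler integration of dC/dt = (D*(CD - C) - G)/V."""
    c, t = c0, 0.0
    while t < hours:
        c += dt * (D * (CD - c) - G) / V
        t += dt
    return c

plateau = CD - G / D      # analytical steady state of the model
final = simulate()
print(round(plateau, 1), round(final, 1))
```

With these assumed parameters, plasma bicarbonate approaches ∼29 mmol/l over the session, within the 27–30 mmol/l plateau range quoted above; the model makes explicit that the plateau is set by the balance CD − G/D, not by the dialysate concentration alone.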

Because of the known consequences of metabolic acidosis for the nutritional status of HD patients, higher concentrations of dialysate bicarbonate (39–48 mmol/l), resulting in a significant increase in pre-HD plasma bicarbonate, have been proposed to improve the control of acidosis. This manoeuvre improved protein turnover [44] and triceps skin fold thickness [45], and increased serum branched-chain amino acids [46], but had no effect on serum albumin and total lymphocyte count [47].

At the end of the 1980s, convective techniques were developed for the treatment of dialysis patients, with the goal of improving session tolerance and larger molecule clearance. Acetate-free biofiltration (AFB) uses a dialysate without buffer, with concomitant post-dilution infusion of sterile sodium bicarbonate solution (145 mmol/l). The net bicarbonate flux into the patient is the result of efflux from diffusion (no buffer in the dialysate) and high-flow convection, and influx from infusion. In a multi-centre trial, Albertazzi et al. [48] reported a sodium bicarbonate infusion amount of 7.96±0.61 l per treatment. Santoro et al. [49] studied the factors influencing post-HD plasma bicarbonate level with biofiltration. These factors included the amount of infused sodium bicarbonate, the pre-HD plasma bicarbonate level and body weight (positive relationships), as well as the dialysis time and blood flow (negative relationships). A mathematical model was derived from this study, indicating the amount of sodium bicarbonate to be infused during the dialysis session to reach a given value of post-HD plasma bicarbonate level. In a cross-over multi-centre study, it has been reported that AFB provided a better acid–base control in diabetic patients, when compared with standard bicarbonate dialysis [50]. Movilli et al. [51] reported a better acid–base control in the elderly with AFB, compared with standard bicarbonate HD or HDF.
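The general form of such a predictive model can be sketched as a linear combination of the factors listed above. The coefficients in the sketch below are purely hypothetical placeholders chosen only to carry the signs reported in the study (positive for the amount of infused bicarbonate, pre-HD plasma bicarbonate and body weight; negative for dialysis time and blood flow); they are not the published regression coefficients of ref. [49]:

```python
# Hypothetical linear sketch of a Santoro-type model predicting
# post-HD plasma bicarbonate in AFB.  Coefficients are illustrative
# placeholders only -- NOT the published values from ref. [49].
def predict_post_hd_hco3(infused_hco3_mmol, pre_hco3_mmol_l,
                         weight_kg, time_h, blood_flow_ml_min):
    return (10.0
            + 0.010 * infused_hco3_mmol     # + : infused bicarbonate
            + 0.50  * pre_hco3_mmol_l       # + : pre-HD bicarbonate
            + 0.05  * weight_kg             # + : body weight
            - 0.80  * time_h                # - : dialysis time
            - 0.010 * blood_flow_ml_min)    # - : blood flow

# ~8 l of 145 mmol/l infusate (cf. Albertazzi et al. [48]) ~= 1160 mmol
example = predict_post_hd_hco3(1160, 20, 70, 4, 300)
print(round(example, 1))
```

With these placeholder values, infusing more bicarbonate raises the predicted post-HD level, whilst lengthening the session or raising blood flow lowers it, in line with the relationships reported.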

On-line HDF uses a sterile bicarbonate-buffered dialysate as the infusion solution to compensate for the high convective flow of this technique. The net bicarbonate flux into the patient reflects the difference between the high bicarbonate efflux from convective flow and the bicarbonate influx from the dialysate and from the substitution fluid infused in pre- or post-dilution mode. The bicarbonate concentration in dialysate and substitution fluid ranges from 27 to 35 mmol/l. Studies of the influence of the infusion site (pre- or post-dilution, or mixed) on acid–base status have yielded controversial results.

In conclusion, acid–base status in HD patients is estimated from plasma bicarbonate, which reflects not only protein catabolism but also protein intake. As long-term exposure to acidosis may alter nutritional status by enhancing protein turnover, individualization of the buffer prescription is desirable in order to avoid both metabolic acidosis and post-HD alkalosis. The usual bicarbonate concentration in dialysate is 35 mmol/l; it should be adapted to reach a mid-week pre-HD plasma bicarbonate value, from an adequately handled sample, of 22 mmol/l.

CV stability with convective therapies

Convective therapies, based on pressure-induced water and solute flow across high-flux membranes, appear to be associated with better CV tolerance to sodium and water removal than conventional HD, which is mainly based on diffusive transport [52]. Conventional haemofiltration (HF) and HDF require the infusion of large amounts of sterile, pyrogen-free replacement fluid into the bloodstream to compensate for the high ultrafiltration rates necessary for efficient water and solute removal. Whilst on-line HF or HDF methods, based on the ‘on-line’ production of replacement fluid by filtration of the dialysis fluid itself, are cost-saving procedures, microbiological contamination of the water needs to be monitored frequently to assure water quality and safety. To counterbalance the risk of back-filtration, ultrapure dialysis fluid is a necessary requirement for HD with high-flux membranes and also for on-line treatments. With respect to solute transport, diffusion is particularly efficient for clearing low molecular weight, non-protein-bound solutes, whilst convective transport is more efficient for middle and high molecular weight molecules.

Some advantages of convective techniques are related to improved elimination of β2-microglobulin, which could prevent the development of dialysis-related amyloidosis [53], advanced glycation end-products, hormones and their metabolites (PTH, insulin, glucagon, GH, calcitonin, gastrin, prolactin, FSH, LH) and of some inflammatory molecules (interleukins, TNF, endotoxins).

The pathophysiological mechanisms that influence CV reactivity in convective therapies are elegantly reviewed by Santoro et al. [54]. Apart from the use of biocompatible high-flux membranes, which do not activate complement, remove high amounts of vasodilating substances and improve sympathetic and baroreceptor function, the two most beneficial aspects of convective therapies in comparison with conventional HD are lower core body temperature and less negative sodium balance, depending on the concentration of sodium in the substitution fluid that ideally preserves blood volume in conjunction with the maintenance of plasma osmolality.

An optimal adaptability of systemic vascular resistance is necessary to maintain blood pressure in the normal range during dialysis ultrafiltration. When HF is compared with high-flux HD, there is a significant increase in heart rate and cardiac index and a fall in systemic vascular resistance with high-flux HD, whilst systemic vascular resistance is maintained with HF [55]. At the same time, intra-vascular volume is better preserved with HF. Comparing the sensitivity index of blood pressure to blood volume changes during HF and HD, it has been found that, for an equivalent blood volume loss, there is a significantly smaller decline in systolic blood pressure with HF than with HD [54].

Sodium removal with convective therapies

The better haemodynamic tolerability of HF/HDF may be due to the lower sodium removal obtained when compared with HD. According to Locatelli's data, there is a clear difference between HD and HF in the estimated sodium removal expressed as mmol/l of water [56]. For a similar sodium concentration in the dialysate and the reinfusate (140 mmol/l) and a plasma sodium concentration ranging from 136 to 142 mmol/l, sodium removal is always lower in HF than in HD. For an infusate concentration of 140 mmol/l in HF, an equivalent sodium removal during HD can be obtained only if the dialysate sodium concentration is increased to 144 mmol/l.
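The difference in sodium removal between the two modalities follows from a simple mass balance. In HD, sodium leaves both by diffusion (driven by the gradient between Donnan-adjusted plasma water sodium and dialysate sodium) and by convection with the net ultrafiltrate; in post-dilution HF there is no diffusive term, and the infusate returns sodium to the patient. The sketch below is illustrative only: the dialysance, session length, ultrafiltration and infusion volumes, Donnan factor and plasma water fraction are all assumptions, not values from the text:

```python
# Illustrative sodium mass balance: HD vs post-dilution HF, both with
# 140 mmol/l sodium in dialysate/infusate.  All parameters assumed.
C_PLASMA   = 140.0                       # plasma sodium, mmol/l
C_EFF      = 0.96 * C_PLASMA / 0.93      # Donnan-adjusted plasma water Na
C_BATH     = 140.0                       # dialysate (HD) / infusate (HF) Na
DIALYSANCE = 43.2                        # ionic dialysance x time: 0.18 l/min * 240 min
UF_NET     = 2.0                         # net fluid removal, l
V_INF      = 20.0                        # HF substitution volume, l

# HD: diffusive removal down the gradient, plus convective removal
na_hd = DIALYSANCE * (C_EFF - C_BATH) + UF_NET * C_EFF

# HF: convective removal of (V_INF + UF_NET) litres of ultrafiltrate,
# minus the sodium returned to the patient with the infusate
na_hf = (V_INF + UF_NET) * C_EFF - V_INF * C_BATH

print(round(na_hd), round(na_hf))
```

Because the Donnan-adjusted plasma water sodium exceeds 140 mmol/l, HD gains an extra diffusive term that HF lacks, so removal at equal bath concentrations is lower in HF; raising the HD dialysate sodium shrinks that diffusive term, which is why the dialysate must be increased towards 144 mmol/l before HD removal falls to the HF level.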

In HDF, sodium kinetics and plasma water sodium concentration depend on three variables: (i) the gradient between plasma water sodium and dialysate sodium; (ii) the sodium concentration of the infusate; and (iii) the ultrafiltration rate. During HDF, high ultrafiltration rates lead to additional sodium retention as a result of the Donnan effect, and this may be counterbalanced by increased sodium removal by diffusion in order to maintain an adequate balance [57]. Sodium kinetics during HDF have been studied using a computer model that predicts changes in plasma water sodium concentration and the net transfer of solute during treatment, using different dialysate and infusate sodium concentrations with variable sodium content and infusion rate. Pedrini et al. [57] showed that the amount of sodium delivered with the infusate may lead to changes in the plasma water sodium concentration that significantly affect solute transfer by diffusion and convection. Their observations confirm that, in order to achieve target sodium removal, higher trans-membrane sodium gradients are needed in HDF.

Different thermal effects of convective therapies

The other significant aspect that improves tolerance to fluid removal in convective therapies, compared with standard HD, is the difference in thermal energy balance between these treatment modalities. During conventional HD at 37.5°C, there is an energy transfer to the patient that increases body temperature, which in turn leads to dilatation of resistance vessels, impairing the physiological response to fluid removal [58]. Cooling the dialysate clearly improves vascular reactivity, and some studies have shown a significant decrease in the incidence of hypotensive episodes during HD sessions using cool-temperature dialysis fluid [59,60]. In contrast, during HF it is necessary to infuse into the patient a large amount of sterile replacement fluid in bags that tend to equilibrate with room temperature. The incidence of hypotensive episodes has been strongly related to thermal changes when comparing post-dilution HDF and HD, and was found to depend on the amount of replacement fluid infused [59]. However, in on-line therapies, the temperature of the infusate can easily be modified by adjusting the dialysate temperature. Comparing patients’ body temperature profiles in on-line HDF vs standard HDF, the temperature of the blood re-entering the patient was on average 1.5°C higher during on-line HDF than during standard HDF.
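The thermal effect can be quantified with a simple extracorporeal energy balance: the heat delivered to (or taken from) the patient is the product of blood flow, the specific heat of blood and the venous-arterial temperature difference, accumulated over the session. The blood flow, temperatures and physical constants below are illustrative assumptions, not values from the text:

```python
# Illustrative extracorporeal thermal energy balance over a session.
RHO_BLOOD = 1.052   # kg/l, approximate density of blood (assumed)
C_BLOOD   = 3.64    # kJ/(kg*K), approximate specific heat of blood (assumed)

def thermal_energy_kj(q_blood_l_min, t_venous, t_arterial, minutes):
    """Energy delivered to the patient (positive = warming), kJ."""
    blood_mass = q_blood_l_min * minutes * RHO_BLOOD
    return blood_mass * C_BLOOD * (t_venous - t_arterial)

# Warm dialysate: blood returns 0.5 degC above arterial temperature
warming = thermal_energy_kj(0.3, 37.0, 36.5, 240)
# Cool dialysate: blood returns 0.5 degC below arterial temperature
cooling = thermal_energy_kj(0.3, 36.0, 36.5, 240)
print(round(warming), round(cooling))
```

Even a modest venous-arterial temperature offset sustained over a 4 h session transfers energy of the order of 10^2 kJ, enough to shift core temperature and hence vascular tone in the direction described above.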

Clinical studies

Several studies, such as the Sardinian multi-centre study comparing pre-dilution on-line HF with high-flux HD at identical Kt/V and treatment times, showed better haemodynamic stability together with a clear fall in the frequency of hypotensive episodes during HF compared with HD [61]. However, in this study, mean blood pressure appeared to be higher, both pre- and post-session, during the HF treatment periods. Thus, HF appears to provide better vascular stability, with a lower tendency to hypotension during the sessions, but the price to be paid is a higher pre-dialysis blood pressure.

Other studies have demonstrated that on-line HDF with production of replacement fluid from the dialysate is safe and provides superior removal over a wide molecular weight range compared with high-flux HD [62]. Over a follow-up period of 12 months, removal of both small (urea and creatinine) and middle (β2-microglobulin) molecules was significantly higher with HDF than with high-flux HD, but this did not result in lower pre-treatment plasma concentrations [60]. In this study, 44 patients were randomized to on-line HDF or high-flux HD; three of those assigned to on-line HDF had to be withdrawn from the study owing to worsening hypertension, which increased from 156/86 mmHg at baseline to 173/93 mmHg at withdrawal, with a post-dialysis blood pressure of 240/120 mmHg observed in one patient. Excluding these three patients, however, there was no significant difference in blood pressure values over the course of the study. Worsening hypertension has been observed previously in a few patients treated with HDF, and may be due to sodium retention, namely when the replacement fluid has the same sodium concentration as the dialysis fluid, as occurs in on-line HDF. On the other hand, a better CV tolerance to water and sodium removal during on-line HDF, with a significantly lower incidence of symptomatic hypotensive episodes requiring the infusion of saline or hypertonic solutions, has been observed; in this study there were no differences in pre- and post-HD blood pressure between on-line HDF and HD [62]. In a large epidemiological study comparing morbidity and mortality between convective and diffusive therapies in 6444 ESRD patients from the Lombardy Registry, Locatelli and co-workers [53] found a significant delay in the requirement for carpal tunnel surgery, a marker of dialysis-related amyloidosis, in patients treated with convective therapies compared with patients on standard HD.

Oral dialysis: is it a realistic goal for the future?

There are approximately one million people in the world who are alive only because they have access to one form or another of renal replacement therapy [63]. Ninety percent of them live in developed countries. Uraemia from kidney failure is a complex syndrome requiring the regular removal of waste metabolites and the adjustment of water and electrolytes. Standard treatment by dialysis or transplantation is effective but expensive, and most developing countries cannot afford it. Several attempts have been made to find alternative methods to correct uraemia, of which there are three major approaches. One is the use of adsorbents to bind urea directly. A second is the use of co-immobilized urease to break down urea into ammonia, which is then removed by adsorbents. The third is the use of micro-encapsulated multi-enzyme systems to convert urea and ammonia into essential amino acids. However, the problem with the first approach is that a very large dose of adsorbent is required, and the same applies to the micro-encapsulated urease–zirconium phosphate system. The third approach lacks a sufficient conversion rate. Thus, there is a clear need for an efficient system of urea and ammonia removal.

In the last two decades there has been an explosive increase in molecular biology research. As a result, a number of genetically engineered micro-organisms with many special features have become available. However, there are safety concerns about introducing genetically engineered micro-organisms into the human body for therapeutic applications. This problem can be overcome by the use of semi-permeable microcapsules as artificial cells to microencapsulate genetically engineered cells for oral administration.

Recent studies have shown that micro-encapsulated, genetically engineered Escherichia coli DH5 cells containing the Klebsiella aerogenes urease gene can effectively remove urea, ammonia, uric acid, sodium, potassium, chloride, phosphate and cholesterol from uraemic plasma in vitro [64]. Creatinine removal, however, was very modest. Prakash and Chang [65] therefore studied the oral use of micro-encapsulated, genetically engineered E. coli DH5 cells in rats with surgically induced renal failure. When given orally, the cells remained at all times within the microcapsules and were finally excreted in the stools. During their passage through the intestines, small molecules such as urea diffuse rapidly into the microcapsules and are acted upon by the genetically engineered cells. This lowered the high plasma urea of rats with renal failure to normal [65]. In the control group of uraemic rats, 75% died after 67 days; remarkably, the uraemic rats that received micro-encapsulated genetically engineered E. coli DH5 cells all survived the treatment period. Calculations based on the results of this study led to the hypothesis that a 70 kg patient with the same degree of uraemia as the rats would need to ingest 4 g of E. coli DH5 cells orally each day to achieve a similar result. This research has exciting implications for the use of this and other types of genetically engineered cells, not only in patients with ESRD, but also in a number of other disease settings.

Final accord

After intensive discussion, the panel reached consensus on the following key points.


Sodium balance plays a central role in the delivery of dialysis treatment and should be always considered first when studying CV behaviour in the intra-dialytic and inter-dialytic intervals.

A negative sodium balance may contribute to low CV stability during HD treatment, as well as to symptoms related to the disequilibrium syndrome (fatigue, muscle cramps, headache). On the other hand, a positive sodium balance can reduce intra-dialytic side effects, though possibly at the cost of increased inter-dialytic side effects (thirst, weight gain, hypertension and eventually the development of cardiomyopathy and pulmonary oedema).

In order to reduce intra-dialytic and inter-dialytic morbidity, it is important that, at the end of each dialysis session, the amount of sodium removed by dialysis matches exactly the amount of sodium accumulated in the inter-dialysis interval (zero balance). This is not so easy to achieve, given the highly variable amounts of sodium introduced during inter-dialytic periods.

Sodium profiling is promising, but it must always be combined with the implementation of a sodium kinetic model in order to avoid sodium accumulation.

The classical single-pool sodium kinetic model is not applicable to everyday clinical practice, as it requires frequent blood sampling and complex calculation. However, the sodium-conductivity kinetic model, by means of the on-line determination of ionic dialysance, allows removal, in each patient and at each dialysis session, of the amount of sodium accumulated in the inter-dialytic period and is likely to improve CV stability in patients prone to intra-HD hypotension.


Good epidemiological data concerning the impact of potassium disturbances on morbidity and mortality in HD patients is lacking. An adequate investigation of this issue will have to take account of whether the patient is diabetic or not and to provide exact information on concomitant medications.

Whatever the epidemiological data, current dialysis practice is rather uniform and does not take into account pre-HD electrolyte (especially potassium) serum levels.

Although current evidence questions the relevance of VEA as a risk factor for mortality, the arrhythmogenic potential of dialysis (ventricular ectopic beats, QT dispersion and late ventricular potentials) appears to be influenced by serum potassium levels and by intra-HD changes in serum potassium.

From this perspective, potassium profiling using a constant serum-to-dialysate potassium gradient has been demonstrated to prevent VEA, without adversely changing inter-dialytic potassium levels. Therefore, it should be implemented as a standard dialysis procedure.

The relationship between potassium and blood pressure is weak, with the suggestion of a negative influence of low-potassium dialysate on intra-HD CV stability. We recommend that, at present, potassium dialysate be maintained at 2.5–3.0 mmol/l and dialysis machines with potassium profiling be introduced in the near future.


Calcium ions play a pivotal role in the contractile processes of both vascular smooth muscle cells and cardiac myocytes. Consequently, changes in serum ionized calcium levels may have significant haemodynamic effects.

High dialysate calcium concentration exerts a clinically important, positive effect on the blood pressure response of cardiac-compromised patients and should be used as a clinical strategy in this patient population, considering that it is uncertain to date whether soft tissue calcifications in these patients affect life expectancy.

Acid buffering

Acid–base status in HD patients is estimated from plasma bicarbonate concentration.

Plasma bicarbonate assessment requires an adequate handling of the plasma sample avoiding air contact and delay in its measurement.

Plasma bicarbonate level is also a reflection of protein intake, explaining a U-shaped curve of the relationship between HD patient survival and the pre-HD plasma bicarbonate level.

Buffer supply to the patient from dialysate with conventional HD treatment or infusion with convective techniques is necessary to correct the acidosis of HD patients.

As long-term exposure to acidosis may alter nutritional status by enhancing protein catabolism, individualization of the buffer prescription is necessary to avoid metabolic acidosis and also post-dialysis alkalosis. The usual dialysate bicarbonate concentration is 35 mmol/l, but it should be adapted to reach a midweek plasma bicarbonate value of 22 mmol/l.

CV stability in convective therapies

The role of convective techniques in CV stability remains unresolved. Convective transport appears to elicit a more physiological response to fluid removal than standard diffusive HD.

Most studies showed that different thermal effects of convective therapies play an important role in this respect.

The other major point to take into account is sodium balance. Sodium removal with convective techniques appears to be less than with standard HD for the same dialysate and infusate sodium concentrations. This raises some doubt as to the intrinsic capacity of HF and HDF to induce better blood pressure stability, independent of sodium or temperature.

At this point, there is no clear evidence in favour of the hypothesis that CV stability with convective techniques is related to a more efficient removal of various vasoactive compounds of medium or large molecular size.

This report comes from the sixth ‘Accord Workshop’, which took place in Malaga in April 2003. The Accord Programme is an independent initiative supported by Membrana GmbH, seeking to bring about European consensus on important treatment and management issues in nephrology and dialysis, to help optimize clinical outcomes for patients (more information can be found at the Accord website: www.membrana.com). Francesco Locatelli, as chairman of the Accord programme, wrote the Introduction and oversaw the consistency of the manuscript of the Accord workshop. Contributions to other sections were made by the following participants: Francesco Locatelli: Sodium; Adrian Covic: Potassium; Charles Chazot: Acid buffering; Karel Leunissen: Calcium; José Luño: CV stability with convective therapies; Mohammed Yaqoob: Oral dialysis: is it a realistic goal for the future? We would like to thank Marco D’Amico and Robin Wright for their help in editing the manuscript.

Conflict of interest statement. None declared.
