These drugs made it possible to control more efficiently the high burden of cardiovascular (CV) disease that affects ESRD patients and to further improve their quality of life. As a result of the exciting advances in the care of dialysis patients and in the practice of renal transplantation, we have observed a dramatic increase in the number of ESRD patients referred to renal substitutive treatments over the last decade.
Therefore, the present challenge of dialysis therapy is to provide treatments with the highest possible bio-compatibility and tailored to individual clinical conditions. In other words, modern dialysis therapy should be personalized to meet the requirements of a hypotension-prone population with poor compliance in terms of water and salt restriction in the inter-HD interval.
This report discusses the most relevant current issues related to adequate exchanges between uraemic blood and a liquid of known composition, i.e. the dialysate. A critical appraisal of convective techniques, often advocated as the most efficient with respect to CV stability, is provided. Lastly, the Accord Group provides its consensus on important key points. It is well known that intra-HD sodium removal has very important clinical implications. Changes in the body sodium pool play a pivotal role in the genesis of intra-HD CV instability, on the one hand, and in inter-HD over-hydration and hypertension, on the other [ 2 ].
Despite technological advances in dialysis equipment and modalities, CV instability during HD treatment is still an important clinical problem for several reasons including the progressive ageing of the dialysis population, the increased burden of patients with diabetes and CV co-morbidity, and the tendency to shorten dialysis treatment time by increasing blood flow, dialyser surface area and ultrafiltration rates.
Hypertension is epidemic in HD centres. Despite improved pharmacotherapy, as many as one-third to one half of HD patients are hypertensive. To further complicate the picture, given the daily fluctuations in individual dietary sodium and water intake, the dialysis prescription should be different not only amongst patients, but also within individual patients from one dialysis session to another.
In the dialysis prescription there are different parameters that can be modified: amongst these, the sodium concentration in the dialysate plays a pivotal role. Sodium crosses the dialysis membrane by diffusion and convection. It is well known that the sodium fractions transported by these two mechanisms are not the same [ 3 ], and this is important to keep in mind in order to correctly define intra-HD sodium kinetics and to choose the proper dialysate sodium concentration.
In the past, some difficulties arose from the use of different laboratory instruments to measure the blood and dialysate sodium concentrations (flame photometry or ionometry). Nowadays, dialysis departments generally use direct ionometry, which measures the ionized plasma water sodium concentration in the blood and the ionized sodium concentration in the dialysate.
It is acceptable for clinical purposes to assume that the plasma water sodium concentration directly measured by ionometry corresponds to the ultrafiltrable fraction, i.e. the fraction that can actually cross the dialysis membrane. Diffusive sodium transport depends on the difference between the sodium concentration in blood and dialysis fluid. Because the dialysate is protein-free, the ionized sodium concentration, directly measured by ionometry, can be considered the concentration suitable for calculating the trans-membrane sodium diffusion gradient on the dialysate side.
Given the Donnan effect, mainly due to the negative charge of plasma proteins, the blood sodium concentration measured by direct ionometry should be corrected by a Donnan factor of approximately 0.95. When the dialysate sodium activity corresponds to the plasma water sodium activity multiplied by this Donnan factor, there is no net diffusive sodium transfer. Hyponatric dialysate can theoretically be used if the patient needs to lose sodium by diffusion. This was usual clinical practice in the past; we feel that it should now be avoided.
Hypernatric dialysate is more frequently used in order to avoid excessive sodium losses due to ultrafiltration and prevent CV instability. In fact, when the sodium concentration in the dialysate is higher than the patient pre-HD blood sodium concentration, the patient is supplied with sodium via diffusion for as long as necessary to equalize the difference in concentrations.
In this case, the diffusive sodium transport to the patient counteracts the convective sodium removal due to ultrafiltration. Moreover, sodium loading can trigger an intense sensation of thirst, causing high water intake in the inter-HD interval, which then has to be treated with high ultrafiltration rates during HD, in turn favouring hypotensive episodes.
Hence, the goal of dialysis is to remove the exact quantity of sodium that has accumulated in the inter-dialysis period in order to reach a zero balance. According to the single pool sodium kinetic model, zero sodium balance between the intra-HD sodium removal and the inter-HD sodium accumulation can be achieved by individualizing the dialysate sodium concentration for each dialysis, to reach a constant end-dialysis plasma water sodium concentration and applying a rate of ultrafiltration equal to the interdialytic increase in body weight [ 4 ].
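The zero-balance principle can be made concrete with a toy calculation. The sketch below is illustrative only: the single-pool assumption, the Donnan factor of 0.95 and all parameter values are assumptions for illustration, not a validated prescription tool.

```python
def sodium_removal_mmol(c_pw, c_dialysate, dialysance_ml_min, t_min,
                        uf_litres, donnan=0.95):
    """Rough intra-HD sodium removal (mmol), treating the plasma water
    sodium concentration c_pw (mmol/l) as constant over the session.

    Diffusive component: dialysance x time x Donnan-corrected gradient.
    Convective component: ultrafiltered volume x ultrafiltrate sodium,
    approximated here as the Donnan-corrected plasma water value.
    """
    diffusive = dialysance_ml_min * t_min * (donnan * c_pw - c_dialysate) / 1000.0
    convective = uf_litres * donnan * c_pw
    return diffusive + convective

# Hypothetical session: plasma water Na 140 mmol/l, dialysate Na 138 mmol/l,
# dialysance 180 ml/min, 240 min, 2.5 l ultrafiltered.
removal = sodium_removal_mmol(140.0, 138.0, 180.0, 240.0, 2.5)

# Zero balance would require 'removal' to match the inter-HD sodium
# accumulation from dietary salt intake (an assumed, patient-specific figure).
```

Note that with these assumed numbers the diffusive term is negative: the patient gains sodium by diffusion, illustrating how a hypernatric dialysate can offset convective losses.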
Unfortunately, this model is not suitable for routine use because it requires the pre-dialysis plasma water sodium concentration at each dialysis session. Consequently, the most frequent therapeutic measure against intra-HD hypotensive complications is still the application of hypernatric dialysate. The downside is the risk that the patient leaves the dialysis treatment with hypernatraemia and has to take in fluid to compensate for the sensation of thirst, i.e. a vicious circle of thirst, inter-HD fluid gain and further ultrafiltration requirements.
On the other hand, sodium therapy is rapidly effective and extremely favourable from a cost-effectiveness point of view. Over the past several years, many authors have proposed the use of sodium profiles in order to improve intra-HD CV stability and minimize the possible complications of high-sodium dialysate.
The rationale is that, by increasing the conductivity, i.e. the dialysate sodium concentration, during part of the session, vascular refilling is supported and CV stability improved. Frequently, sodium profiles combine a variation of the dialysate sodium concentration during the HD session with a variation of the ultrafiltration rate. Generally, a higher ultrafiltration rate is combined with a higher dialysate sodium concentration in the first phase of dialysis, and this is reversed in the final phase [ 7 ].
The introduction of continuous blood volume monitoring on-line during HD has offered the opportunity of changing both the ultrafiltration rate and the dialysate sodium concentration, in order to maintain a constant blood volume reduction rate, considering hypovolaemia as the major cause of intra-HD hypotension [ 8 , 9 ].
When sodium profiling was compared with standard HD treatments, the results generally showed an improvement in CV stability. However, this was frequently associated with higher post-HD plasma water sodium concentration, higher pre-dialysis body weight and higher blood pressure values as a result of inadequate sodium removal [ 6 , 7 ].
In the study by Oliver et al., a marked difference in sodium removal was implied, as also expressed by the higher post-HD serum sodium concentration and pre-HD body weight observed in the profiled treatment group. Many of the studies that evaluated the effect of different sodium profiles on CV instability did not take the sodium balance into account, as evidenced by the fact that many provided insufficient information to calculate sodium removal, including the methods used to determine sodium concentration.
In other words, whilst sodium profiling may be associated with improved intra-HD CV stability, the pathophysiological basis remains ill understood, and sodium profiling has not solved the issue of sodium balance. Gotch et al. laid the theoretical groundwork, and on the basis of these premises Di Filippo et al. developed a kinetic approach to the dialysate sodium prescription. Unfortunately, that model is unsuitable for routine clinical use because of the need for blood sampling and laboratory determinations at each dialysis session. Given the linear correlation between the conductivity of the dialysate and its sodium content, however, conductivity values can be used instead of sodium concentration values.
According to the basic theory developed by Polaschegg [ 11 ] and Petitclerc et al. The sodium kinetic model can therefore be changed into the conductivity kinetic model, allowing the prediction of final plasma water conductivity, when the dialysate conductivity is known, and of the dialysate conductivity required to obtain a desired final plasma water conductivity.
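As a numerical sketch of the conductivity kinetic model (single-pool; the function names and all parameter values are our own illustrative assumptions):

```python
import math

def final_plasma_conductivity(cp0, cd, dialysance_ml_min, t_min, v_litres):
    """Predicted end-dialysis plasma water conductivity (mS/cm) for a
    single pool of volume v_litres exposed to dialysate conductivity cd."""
    k = math.exp(-dialysance_ml_min * t_min / (v_litres * 1000.0))
    return cd + (cp0 - cd) * k

def required_dialysate_conductivity(cp0, cp_target, dialysance_ml_min,
                                    t_min, v_litres):
    """Dialysate conductivity needed to end the session at cp_target."""
    k = math.exp(-dialysance_ml_min * t_min / (v_litres * 1000.0))
    return (cp_target - cp0 * k) / (1.0 - k)

# Hypothetical session: start at 14.0 mS/cm, aim for 13.8 mS/cm,
# ionic dialysance 180 ml/min, 240 min, 40 l distribution volume.
cd = required_dialysate_conductivity(14.0, 13.8, 180.0, 240.0, 40.0)
```

The second function is simply the algebraic inversion of the first, mirroring the two predictions the text attributes to the model.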
Some HD machines include modules capable of determining ionic dialysance and plasma water conductivity on-line, allowing routine application of the conductivity kinetic model. Paired filtration dialysis (PFD) is a special case of haemodiafiltration (HDF) in which convection and diffusion take place separately, allowing on-line monitoring of ultrafiltrate conductivity. We would like to stress that the improvement in CV stability was obtained by reducing the variability of end-dialysis ultrafiltrate conductivity, which is mainly related to day-to-day variations in sodium intake.
In fact, these results were obtained by simply modifying the dialysate conductivity in order to reach an ultrafiltrate conductivity at the end of each session that was equal to the mean value determined in the same patient during the run-in period. This value was not different between the two treatment arms, but the variability of end-dialysis ultrafiltrate conductivity was lower during the experimental treatment than during the conventional treatment.
In the future, it will probably be possible, due to the progressive development of automatic systems, to combine the benefits of sodium profiling, minimizing the complications of hypernatric dialysis, with the benefits of the conductivity kinetic model, providing an adequate sodium removal.
In order to be adequate, potassium removal during dialysis should be equal to the amount accumulated during the inter-dialytic period. However, as it is very difficult to quantify both inter-dialytic loading and intra-dialytic removal in clinical practice, potassium removal by dialysis is termed satisfactory if predialysis hyperkalaemia is avoided.
On the other hand, the safety of dialysate potassium concentration is also related to the avoidance of hypokalaemia and dialysis-induced arrhythmias.
Post-HD serum potassium concentration is influenced not only by the pre-HD serum potassium concentration and the dialysate potassium concentration, but also by plasma tonicity and changes in plasma tonicity following HD. For an excellent discussion on various aspects of changes in water and electrolyte homeostasis in dialysis, the reader is referred to a recent review by Redaelli [ 16 ].
Current dialysis practice is rather uniform and does not take into account pre-HD serum electrolyte (especially potassium) levels; a study by Sforzini et al. has questioned this practice. It has therefore been asked whether the occurrence of complex ventricular arrhythmia is influenced by the dialysate potassium concentration. Studies have yielded conflicting results: one investigation enrolled 74 long-term HD patients, whilst another study, performed in 78 HD patients, claimed an impact of low potassium concentration in the dialysate [ 20 ]. Thus, this issue is still controversial. From this perspective, it is important to investigate the influence of serum potassium, and of its changes, on QT-interval duration and dispersion, and on the occurrence of ventricular late potentials.
QT dispersion (maximum minus minimum QT interval on the standard 12-lead electrocardiogram) is a marker of the variability of ventricular repolarisation. Increased QT dispersion is observed in individuals with hypertrophic cardiomyopathy and episodes of ventricular tachyarrhythmia. In ESRD, which is well known to be associated with an increased risk of malignant arrhythmias, there are relatively few data on the impact of HD sessions on QT dispersion. Cupisti et al. found no correlation between the degree of intra-HD serum potassium concentration change and the increase in QT dispersion.
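Operationally, QT dispersion is just the spread of the QT interval across the measured leads; a minimal sketch (the measurements are hypothetical values):

```python
def qt_dispersion_ms(qt_by_lead_ms):
    """QT dispersion: maximum minus minimum QT interval (ms) across the
    measured ECG leads."""
    return max(qt_by_lead_ms) - min(qt_by_lead_ms)

# Hypothetical QT measurements (ms) from a subset of the 12 leads:
pre_hd = [380, 372, 391, 401, 388, 395]
dispersion = qt_dispersion_ms(pre_hd)  # 401 - 372 = 29 ms
```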
In a study by Covic et al., patients in whom the dialysis session led to an increase in the QT interval started HD with significantly lower serum potassium and higher serum ionized calcium concentrations, and displayed a greater reduction in serum calcium concentration following HD. Thus, it appears that the main predictor of changes in the QT interval across a single dialysis session is potassium-related, i.e. the pre-dialysis serum potassium concentration.
Although the data presented above would suggest that intra-HD serum potassium changes do not affect QT dispersion, these studies were not aimed at elucidating this relationship. Indeed, Cupisti et al. observed an increase in QT dispersion during HD. Most importantly, this increase was totally blunted during a standard HD with no plasma potassium decrease (potassium dialysate profiling) and reappeared when a fixed dialysate potassium concentration was restored.
In summary, the pre-dialysis serum potassium concentration may play a role in certain subsets of patients, and intra-HD serum potassium shifts are potentially relevant when they induce a sizeable decrease in serum potassium concentration, each individual having a different arrhythmogenic threshold. Signal-averaged electrocardiography (SAECG) has been developed in recent years with the purpose of detecting ventricular late potentials non-invasively, to identify patients at risk of sudden death or ventricular tachycardia.
Ichikawa et al. reported SAECG findings that differed according to the pre-HD serum potassium level, with the opposite pattern in the high pre-HD serum potassium concentration group. An insufficient decrement of the serum potassium concentration by HD is suggestive of an arrhythmogenic effect in patients with high pre-HD serum potassium concentrations [ 24 ]. Redaelli and colleagues [ 25 ] performed a prospective, randomized, cross-over study specifically designed to clarify whether a new mode of HD potassium removal, using a decreasing intra-HD dialysate potassium concentration and a constant plasma-dialysate potassium gradient throughout the HD session, is capable of reducing the arrhythmogenic effect of standard HD.
This was compared with standard HD sessions, i.e. sessions with a fixed dialysate potassium concentration. In the potassium-profiled treatment, the initial dialysate potassium concentration was set in relation to the pre-dialysis plasma potassium, so as to maintain a constant plasma-dialysate gradient throughout the session.
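The constant-gradient idea behind potassium profiling can be sketched as follows. Everything here is an assumption for illustration: the exponential model of the intra-HD plasma potassium decline, its rate constant and the 1.5 mmol/l gradient are hypothetical values, not those of the published protocol.

```python
import math

def potassium_profile(k_plasma0, gradient, t_total_min, step_min=30,
                      decay_per_min=0.002):
    """Dialysate potassium profile (mmol/l) keeping a constant
    plasma-dialysate gradient, given a crude exponential model of the
    intra-HD plasma potassium decline."""
    profile = []
    for t in range(0, t_total_min + 1, step_min):
        k_plasma = k_plasma0 * math.exp(-decay_per_min * t)
        profile.append((t, round(k_plasma - gradient, 2)))
    return profile

# Hypothetical patient: pre-HD plasma K 5.5 mmol/l, gradient 1.5 mmol/l,
# 240 min session sampled every 30 min.
profile = potassium_profile(5.5, 1.5, 240)
```

The dialysate concentration thus starts relatively high and falls in step with the modelled plasma potassium, so the driving gradient, and hence the removal rate, stays roughly constant.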
Although the initial plasma-dialysate potassium gradient was substantial, potassium profiling of the dialysate proved an efficient tool to decrease the arrhythmogenic effect of HD. Low serum potassium is a risk factor for CV disease, including hypertension, in the non-renal general population. There are several potential reasons to incriminate the dialysate potassium concentration in the blood pressure reduction seen during HD sessions: (i) hypokalaemia may exacerbate autonomic dysfunction; (ii) hypokalaemia alters cardiac inotropism; and (iii) intra-HD potassium loss accounts for a modest decrease in total osmoles.
Direct evidence has tended rather to negate an impact of dialysate potassium on blood pressure [ 26 ]: blood pressure decreased during the HD session as excess fluid was removed, regardless of the dialysate potassium concentration.

Calcium mass balance studies have examined, in patients with a normal serum calcium before HD, the net calcium transfer obtained with the dialysate calcium concentrations in common use.
Calcium ions play a pivotal role in the contractile process of both vascular smooth muscle cells and cardiac myocytes. However, because of the frequent use of calcium salts as phosphate binders, it is often advised to use low dialysate calcium concentrations to prevent or treat hypercalcaemia [ 28 , 29 ].
In general, a dialysate calcium concentration in the range of 1.25-1.75 mmol/l is used. Several studies in dialysis patients have shown that changes in serum ionized calcium levels have significant haemodynamic effects [ 30 , 31 ].
The effect of dialysate calcium concentration on blood pressure seems to be mediated predominantly through changes in myocardial contractility, although several investigators also found a change in vascular reactivity [ 31 — 33 ].
A recent study reported a significantly larger decline in blood pressure during dialysis with low dialysate calcium concentration, compared with high dialysate calcium concentration, in patients with normal cardiac function, which was mainly related to decreased left ventricular contractility with the use of low calcium dialysate [ 34 ]. In another study in cardiac-compromised patients, the effect of, respectively, low calcium and high calcium dialysate on systolic blood pressure course was compared [ 35 ].
In contrast, during high-calcium dialysis in the same patients, mean arterial pressure remained stable. This effect was mediated by direct changes in myocardial contractility, whereas systemic vascular resistance remained unchanged. According to these results, it is conceivable to suggest the use of a high dialysate calcium concentration in cardiac-compromised, hypotension-prone patients.
Calcification in these patients will probably not affect life expectancy immediately, whereas, by preventing intra-HD hypotension, a high dialysate calcium concentration may yield a more immediate clinical benefit. Moreover, the advent of calcimimetics may in the future allow the renewed use of higher dialysate calcium concentrations than at present.

Recently, by comparing total CO 2 assessments performed with the same method but at a different time and place, it has been pointed out that delayed sample handling may jeopardize the results and the interpretation of the acid-base status of dialysis patients [ 37 ].
The blood sample for bicarbonate (as total CO 2 ) assessment should be processed quickly, protected from air contact, and assessed in the same laboratory. A relationship between protein intake and plasma bicarbonate concentration, and between nutritional markers and plasma bicarbonate concentration, has been reported recently [ 38 , 39 ].
Paradoxically, at least in the short term, a decrease in the plasma bicarbonate level is associated with a more favourable constellation of nutritional markers, because it reflects adequate protein intake.

Diffusive influx of buffer into the patient has been used since the beginning of the dialysis era. Because of precipitation with calcium and magnesium, and because of the risk of bacterial contamination, bicarbonate was rapidly abandoned and replaced by acetate during the first two decades of dialysis therapy.
The key advantages were its equimolar conversion to bicarbonate, a bacteriostatic effect and low cost. However, acetate-induced side-effects were reported in a large number of studies as high-efficiency HD came into use, owing to the limited capacity for hepatic acetate metabolism in at least some patients.
This problem has been explored by Vinay et al. Even though comparative studies of acetate and bicarbonate as the buffer for HD treatment have led to controversial conclusions, untoward effects of acetate have frequently been reported, such as hypoxaemia, vasodilatation, depressed left ventricular function (enhancing the risk of hypotensive episodes during dialysis) and impaired lipid and keto acid metabolism. Bicarbonate is currently the buffer used routinely in HD procedures.
The bicarbonate flux from the dialysate to the patient is determined by the trans-membrane concentration gradient and by the bicarbonate dialysance. In the recent French cross-sectional study quoted above [ 38 ], the midweek pre-HD plasma bicarbonate was assessed. The fate of plasma bicarbonate during the inter-HD period has been studied by Graham et al.
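The dependence of the bicarbonate flux on gradient and dialysance lends itself to a back-of-the-envelope estimate. The single-step approximation and all numbers below are assumptions for illustration; real kinetics also involve organic acid buffering and convective losses.

```python
def diffusive_bicarbonate_gain_mmol(dialysance_ml_min, cd_bic, cp_bic, t_min):
    """Diffusive bicarbonate gain (mmol) over a session, crudely holding
    the plasma bicarbonate concentration constant."""
    return dialysance_ml_min * (cd_bic - cp_bic) * t_min / 1000.0

# Hypothetical: bicarbonate dialysance 100 ml/min, dialysate 32 mmol/l,
# pre-HD plasma 21 mmol/l, 240 min session.
gain = diffusive_bicarbonate_gain_mmol(100.0, 32.0, 21.0, 240.0)  # 264 mmol
```

Because the plasma level actually rises during the session, the gradient narrows and the true gain is smaller than this constant-gradient upper bound.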
The time-averaged concentration of plasma bicarbonate was also calculated. These authors further compared the pre-HD plasma bicarbonate in 46 patients after a 2-day and a 3-day interval, respectively; the pre-dialysis levels were significantly lower after the 3-day interval. Correction of acidosis improved protein turnover [ 44 ] and triceps skinfold thickness [ 45 ], and increased serum branched-chain amino acids [ 46 ], but had no effect on serum albumin and total lymphocyte count [ 47 ].
Convective techniques were developed for the treatment of dialysis patients with the goal of improving session tolerance and the clearance of larger molecules.
The net bicarbonate flux into the patient is the result of efflux from diffusion (no buffer in the dialysate) and high-flow convection, and of influx from infusion.
In a multi-centre trial, Albertazzi et al. studied this approach. Santoro et al. identified the factors governing the post-HD plasma bicarbonate level: the amount of infused sodium bicarbonate, the pre-HD plasma bicarbonate level and body weight (positive relationships), as well as the dialysis time and blood flow (negative relationships).
A mathematical model was derived from this study, indicating the amount of sodium bicarbonate to be infused during the dialysis session to reach a given post-HD plasma bicarbonate level. In a cross-over multi-centre study, acetate-free biofiltration (AFB) was reported to provide better acid-base control in diabetic patients than standard bicarbonate dialysis [ 50 ]; this issue has also been addressed by Movilli et al. On-line HDF uses a sterile bicarbonate-buffered dialysate as the infusion solution to compensate for the high convective flow of this technique.
The net bicarbonate flux into the patient reflects the difference between the high bicarbonate efflux due to convective flow and the bicarbonate influx from the dialysate and from the substitution fluid infused in pre- or post-dilution mode. Studies of the influence of the infusion site (pre-dilution, post-dilution or mixed) on acid-base status have led to controversial results. In conclusion, the acid-base status of HD patients is estimated from plasma bicarbonate, which reflects not only protein catabolism but also protein intake.
As long-term exposure to acidosis may alter nutritional status by enhancing protein turn-over, individualization of buffer prescription is desirable in order to avoid metabolic acidosis and post-HD alkalosis. Convective therapies, based on pressure-induced water and solute flow across high-flux membranes, appear to be associated with better CV tolerance in response to sodium and water removal than conventional HD, which is mainly based on diffusive transport [ 52 ]. Conventional haemofiltration HF or HDF require the infusion of large amounts of sterile and pyrogen-free replacement fluid into the bloodstream to compensate for the high ultrafiltration rates necessary to obtain efficient water and solute removal.
To counterbalance the risk of back-filtration, ultrapure dialysis fluid is a necessary requirement for HD with high-flux membranes and also for on-line treatments.
With respect to solute transport, diffusion is particularly efficient for clearing low molecular weight, non-protein-bound solutes, whilst convective transport is more efficient for middle and high molecular weight molecules. The pathophysiological mechanisms that influence CV reactivity in convective therapies are elegantly reviewed by Santoro et al. Apart from the use of biocompatible high-flux membranes, which do not activate complement, remove large amounts of vasodilating substances and improve sympathetic and baroreceptor function, the two most beneficial aspects of convective therapies in comparison with conventional HD are the lower core body temperature and the less negative sodium balance, which depends on the sodium concentration of the substitution fluid and ideally preserves blood volume in conjunction with the maintenance of plasma osmolality.
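The diffusion-versus-convection contrast can be illustrated numerically. The sketch below uses the classic counter-current clearance relation for diffusion and the sieving-coefficient definition for convection; all KoA, flow and sieving values are assumptions chosen for illustration.

```python
import math

def diffusive_clearance(koa, qb, qd):
    """Counter-current diffusive clearance (ml/min) from the classic
    KoA relation (requires qb != qd)."""
    z = math.exp((koa / qb) * (1.0 - qb / qd))
    return qb * (z - 1.0) / (z - qb / qd)

def convective_clearance(sieving, quf_ml_min):
    """Pure convective clearance: sieving coefficient times UF rate."""
    return sieving * quf_ml_min

# Small solute (urea-like, assumed KoA 800 ml/min), Qb 300, Qd 500 ml/min:
k_small_diff = diffusive_clearance(800.0, 300.0, 500.0)  # ~248 ml/min
k_small_conv = convective_clearance(1.0, 100.0)          # 100 ml/min

# Middle molecule (assumed KoA 60 ml/min, sieving 0.7, UF 100 ml/min):
k_mid_diff = diffusive_clearance(60.0, 300.0, 500.0)
k_mid_conv = convective_clearance(0.7, 100.0)            # 70 ml/min
```

With these assumed numbers, diffusion dominates for the small solute while convection overtakes diffusion for the middle molecule, which is the pattern the text describes.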
The use of replacement fluid is what defines haemofiltration as a distinct modality from slow continuous ultrafiltration (SCUF). The difference between the ultrafiltration rate and the replacement fluid rate is the total fluid removal rate. ADQI has published a formal definition of replacement fluid, which may be given as pre-dilution or post-dilution. In the design of dialysate solutions, only a few basic characteristics are expected of the product.
These requirements make it seem as if a significant amount of engineering is required in the design of these fluids. That is not the case. For example, when Kramer et al. were performing pioneering continuous arteriovenous haemofiltration (CAVH) to remove fluid from heart failure patients, they used unmodified Ringer's lactate, straight off the shelf, as replacement fluid. In those early years, and more recently in resource-poor environments, peritoneal dialysis fluid has been used as dialysate, being a readily available source of large volumes of sterile fluid.
This is obviously not ideal, as the high concentration of dextrose in such makeshift dialysate produces haemodynamic and metabolic disturbances (Celik et al). Both phosphate and potassium are usually elevated in renal failure, which is why these fluids usually do not contain any potassium or phosphate. Potassium is conventionally added to the bags of dialysate (say, 20 mmol per 5 L bag).
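The resulting additive concentration is simple arithmetic; the 20 mmol per 5 L figures below just restate the example above, assuming the bag starts potassium-free.

```python
def bag_concentration_mmol_per_l(added_mmol, bag_volume_l):
    """Concentration (mmol/l) contributed by an additive in a dialysate bag."""
    return added_mmol / bag_volume_l

# 20 mmol of potassium into a 5 l potassium-free bag:
k_conc = bag_concentration_mmol_per_l(20.0, 5.0)  # 4.0 mmol/l
```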
Phosphate is usually replaced in some way while the patient is on dialysis. Buffer is the alkaline component of dialysis fluids. The role and influence of buffers in dialysis is significantly dependent on the technique of dialysis which is used.
There being no THAM in the dialysate solutions currently available on the market, the choice of buffers in dialysate is limited to bicarbonate and a handful of metabolisable anions such as lactate and acetate. Acetate seemed like a good choice for an easily metabolised buffer base.
Unfortunately, a substantial amount of buffer is required, and at such doses acetate acts as a vasodilator. Bradley et al. were able to demonstrate that the MAP of patients dialysed with acetate drops more than with other buffers.