

I need to transfer a large amount of data (several million rows) from one table to another. The statement I've been using so far is:

INSERT INTO TABLE_A (field1, field2)
SELECT field1, field2
FROM TABLE_A_20180807_BCK;

This worked (eventually) for a table with about 10 million rows in it (took 24 hours). The problem is that I have several other tables that need the same process applied, and they're all a lot larger (the biggest is 20 million rows). I have attempted a similar load with a table holding 12 million rows and it failed to complete in 48 hours, so I had to cancel it.

Other issues that probably affect performance are: 1) TABLE_A has a field based on an auto-generated sequence, and 2) TABLE_A has an AFTER INSERT trigger on it that parses each new record and adds a second record to TABLE_B.

A number of other threads have suggested doing a pg_dump of TABLE_A_20180807_BCK and then loading the data back into TABLE_A. I'm not sure a pg_dump would actually work for me because I'm only interested in a couple of fields from TABLE_A, not the whole lot. Instead I was wondering about the following…

Export to a CSV file…

COPY TABLE_A_20180807_BCK (field1, field2) TO 'd:\tmp\dump\table_a.dump' DELIMITER ',' CSV;

…and then import it back into TABLE_A…

COPY TABLE_A (field1, field2) FROM 'd:\tmp\dump\table_a.dump' DELIMITER ',' CSV;

Is the export/import method likely to be any quicker? I'd like some guidance on this before I start on another job that may take days to run, and may not even work any better! The obvious answer of "just try it and see" isn't really an option; I can't afford more downtime!

(This is a follow-on question from this, if any background details are required.)

Under normal circumstances records are inserted into TABLE_A at a rate of about 1000/sec (including trigger time), so I don't think there are any significant problems with the trigger itself. I think the issue is likely to be the size of the transaction: under normal circumstances records are inserted into TABLE_A in blocks of 100 records per INSERT, whereas the statement shown above attempts to add 10 million records in a single transaction. My guess is that this is the problem, but I've no way of knowing if it really is, or if there's a suitable workaround (or whether the export/import method I've proposed would be any quicker).

Maybe I should have emphasized this earlier: every insert into TABLE_A fires a trigger that adds a record to TABLE_B. It's the data that's in TABLE_B that's the final objective, so disabling the trigger isn't an option! This whole problem came about because I accidentally disabled the trigger for a few days, and the preferred solution to the question 'how to run a trigger on existing rows' seemed to be 'remove the rows and add them back again' - see the original post (link above) for details.
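To make the transaction-size comparison concrete, a batched version of the backfill would look something like the statements below rather than one giant INSERT. This is a sketch only: I'm assuming the backup table has a sequential integer key, which I'm calling id purely for illustration (the real column name differs), and a much larger batch size than the usual 100-row blocks.

-- Sketch of a batched backfill; "id" and the 100000-row batch size are assumptions.
-- Each statement, run on its own in autocommit mode, commits as a separate transaction
-- and still fires the AFTER INSERT trigger for every row it adds.
INSERT INTO TABLE_A (field1, field2)
SELECT field1, field2
FROM TABLE_A_20180807_BCK
WHERE id > 0 AND id <= 100000;       -- batch 1

INSERT INTO TABLE_A (field1, field2)
SELECT field1, field2
FROM TABLE_A_20180807_BCK
WHERE id > 100000 AND id <= 200000;  -- batch 2, and so on

Run this way, a failure part-way through only loses the current batch rather than the whole 10-million-row load.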

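For anyone who wants to picture the trigger side of the setup, it is roughly of the following shape. This is a simplified, hypothetical sketch only; the real function name, the TABLE_B columns, and the parsing logic are all different.

-- Hypothetical illustration of the AFTER INSERT trigger described above.
-- parse_to_table_b, table_a_id and some_value are placeholder names, not the real schema.
CREATE FUNCTION parse_to_table_b() RETURNS trigger AS $$
BEGIN
    -- parse the newly inserted TABLE_A row and store the derived record in TABLE_B
    INSERT INTO TABLE_B (table_a_id, some_value)
    VALUES (NEW.id, NEW.field2);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_a_after_insert
    AFTER INSERT ON TABLE_A
    FOR EACH ROW EXECUTE PROCEDURE parse_to_table_b();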