Some file systems, such as NTFS, support sparse files: clusters you have never written to are not allocated on disk. This way, you could open a file, seek to position 1GB, write a single byte, and the file would occupy only a few kilobytes of disk space (the one allocated cluster containing that byte, plus some metadata). However, chances are that when copying that file, the copy will actually occupy 1GB, because reading an untouched cluster does not fail but simply returns all zeroes, which are then written out to the copy.
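Here's a minimal sketch of that seek-and-write trick in Python, assuming a POSIX system and a file system that supports sparse files (whether the hole actually stays unallocated depends on the file system; the file name is just illustrative):

```python
import os

path = "sparse.bin"  # hypothetical file name

with open(path, "wb") as f:
    f.seek(1024 * 1024 * 1024)  # seek to the 1GB mark without writing anything
    f.write(b"\0")              # write one byte; the skipped clusters stay holes

st = os.stat(path)
print(f"logical size: {st.st_size} bytes")              # ~1GB + 1
print(f"actual disk usage: {st.st_blocks * 512} bytes")  # typically a few KB
```

On Linux, `st_blocks` counts 512-byte units, so the two printed numbers make the difference between logical size and allocated space visible. A naive copy (read everything, write everything) loses that difference, because the reads of the holes return zeroes that the copy then materializes.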
But why do the contents of the file have to be stored linearly? Couldn't you just append the added data to the end of the file and keep an in-memory remapping structure, so you know where each logical chunk is physically located in the file?
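A hedged sketch of that append-and-remap idea, again in Python: logical chunk indices are mapped to physical offsets via an in-memory dict, so "overwriting" a chunk just appends a new copy and updates the map. All names here are illustrative, not a real API:

```python
import os

CHUNK = 4096  # assumed fixed chunk size

class RemappedFile:
    def __init__(self, path):
        self.f = open(path, "a+b")  # append mode: writes always go to the end
        self.map = {}               # logical chunk index -> physical offset

    def write_chunk(self, index, data):
        assert len(data) == CHUNK
        self.f.seek(0, os.SEEK_END)
        self.map[index] = self.f.tell()  # remember where this version lives
        self.f.write(data)               # never rewrite in place

    def read_chunk(self, index):
        if index not in self.map:
            return b"\0" * CHUNK         # untouched chunks read as zeroes
        self.f.seek(self.map[index])
        return self.f.read(CHUNK)
```

The obvious trade-offs: the map lives only in memory, so it has to be persisted or rebuilt when the file is reopened, and overwritten chunks leave stale copies behind that need compaction eventually. That is essentially what log-structured file systems do at the file-system level.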