Synchronized scrolling with timestamp
-
Hi,
I’ve got two log files. Each line has a timestamp, and both files use the same format. Can I configure synchronized scrolling to sync on the corresponding timestamp? Is there an add-on?
Example of line:
2023/01/19 10:44:58.362 {STRING} -
@Basti-Klein said in Synchronized scrolling with timestamp:
Can I configure synchronized scrolling to sync on the corresponding timestamp?
No. Wherever the two views happen to be positioned when you select View > Synchronize Vertical Scrolling becomes the point at which the two files are synchronized.
Is there an add-on?
There is no plugin (that I know of) whose primary purpose is synchronizing two files based on a timestamp (or a regex, or the like). That said, ComparePlus, which has an internal algorithm that tries to align partially-matching files, might sync things up the way you want, depending on how the rest of the lines in your data compare. If I were you, that would be my first try.
-
Not at all an answer to your direct question, but in the spirit of possibly “getting your task done”…
You might try a script conversion on your data that changes the leading timestamp on each line to a number of epoch milliseconds. Then for file “A” append an “A” to the time value, and for file “B” append a “B”. Then combine all the data into one (temporary) file and sort it. As an example, after the sorting it might look like:
1674125098362A foo
1674125098369B bar
etc.
Then, I suppose, convert the epoch milliseconds value back into a readable time (leaving the “A” and “B” in place), and you can browse all the data in one file, in order of time. Sounds like a lot of work, but, depending on your need, it may be worth it?
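A minimal sketch of that combine-and-sort step in plain Python (the line contents and the function name merge_tagged_logs are made-up illustrations; it assumes both files have already had their timestamps converted to 13-digit epoch milliseconds with an A/B tag appended):

```python
def merge_tagged_logs(lines_a, lines_b):
    # Every line starts with a 13-digit epoch-millisecond value plus an
    # 'A' or 'B' tag, so a plain lexicographic sort on that 14-char prefix
    # is also a chronological sort (with 'A' winning ties over 'B').
    combined = list(lines_a) + list(lines_b)
    combined.sort(key=lambda line: line[:14])
    return combined

merged = merge_tagged_logs(
    ['1674125098362A foo', '1674207312267A baz'],
    ['1674125098369B bar'],
)
print('\n'.join(merged))
```

The fixed-width millisecond prefix is what makes a simple string sort work; variable-width numbers would need a numeric key instead.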
-
As a kickstarter down the solution path from my earlier idea, here’s probably the most difficult scripting piece:
from datetime import datetime as dt

def custom_replace_func(m):
    dt_matched_time = dt.strptime(m.group(0), '%Y/%m/%d %H:%M:%S.%f')
    epoch_diff = dt_matched_time - dt(1970, 1, 1)
    epoch_secs_as_str = str(int(epoch_diff.total_seconds()))
    # zero-pad so e.g. .038 becomes '038' rather than '380'
    msecs_as_str = str(epoch_diff.microseconds // 1000).zfill(3)
    return epoch_secs_as_str + msecs_as_str + 'A'

editor.rereplace(r'^\d{4}/\d\d/\d\d \d\d:\d\d:\d\d\.\d{3}', custom_replace_func)
That script will take text data like this in the active file tab:
2023/01/19 10:44:58.362
2023/01/19 10:44:59.365
2023/01/19 10:45:05.998
2023/01/19 10:46:57.038
2023/01/20 09:35:12.267
2023/01/20 09:44:23.334
And convert it to this:
1674125098362A
1674125099365A
1674125105998A
1674125217038A
1674207312267A
1674207863334A
This has its roots in the FAQ about customized replacement via script, see HERE for more details about that type of thing.
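If you want to try that logic outside Notepad++ first, here is roughly the same forward conversion as a standalone sketch, with re.sub standing in for the PythonScript-only editor.rereplace (the function name to_epoch_ms is my own placeholder):

```python
import re
from datetime import datetime as dt

def to_epoch_ms(m):
    # Parse the matched 'YYYY/MM/DD HH:MM:SS.mmm' timestamp (treated as UTC)
    # and rewrite it as epoch milliseconds with an 'A' tag appended.
    t = dt.strptime(m.group(0), '%Y/%m/%d %H:%M:%S.%f')
    diff = t - dt(1970, 1, 1)
    secs = str(int(diff.total_seconds()))
    msecs = str(diff.microseconds // 1000).zfill(3)  # keep leading zeros
    return secs + msecs + 'A'

line = '2023/01/19 10:44:58.362 some log text'
print(re.sub(r'^\d{4}/\d\d/\d\d \d\d:\d\d:\d\d\.\d{3}', to_epoch_ms, line))
# -> 1674125098362A some log text
```

Note the zfill(3): without it a timestamp like .038 would come out as '380', which would break the sort order.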
-
And, for completeness, to go back the other way:
from datetime import datetime as dt

def custom_replace_func2(m):
    secs_as_str = m.group(0)[0:10]
    msecs_as_str = m.group(0)[10:13]
    secs_as_float = float(secs_as_str) + (float(msecs_as_str) / 1000.0)
    dt_from_utc_epoch = dt.utcfromtimestamp(secs_as_float)
    return dt_from_utc_epoch.strftime('%Y/%m/%d %H:%M:%S.%f')[:-3]

editor.rereplace(r'^\d{13}(?=A)', custom_replace_func2)
Will take:
1674125098362A
1674125099365A
1674125105998A
1674125217038A
1674207312267A
1674207863334A
and produce:
2023/01/19 10:44:58.362A
2023/01/19 10:44:59.365A
2023/01/19 10:45:05.998A
2023/01/19 10:46:57.038A
2023/01/20 09:35:12.267A
2023/01/20 09:44:23.334A
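The reverse direction can also be tried as a standalone sketch, again with re.sub in place of editor.rereplace; this version rebuilds the time from whole seconds plus a timedelta and re-attaches the millisecond digits as text, so no float rounding is involved (from_epoch_ms is a placeholder name):

```python
import re
from datetime import datetime as dt, timedelta

def from_epoch_ms(m):
    # First 10 digits are epoch seconds, the next 3 are milliseconds.
    secs = int(m.group(0)[0:10])
    msecs = m.group(0)[10:13]
    t = dt(1970, 1, 1) + timedelta(seconds=secs)
    return t.strftime('%Y/%m/%d %H:%M:%S') + '.' + msecs

line = '1674125098362A some log text'
print(re.sub(r'^\d{13}(?=A)', from_epoch_ms, line))
# -> 2023/01/19 10:44:58.362A some log text
```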