Commit Graph

12 Commits

Viktor Malik 6025793513 bpf: omit default off=0 and imm=0 in register state log
JIRA: https://issues.redhat.com/browse/RHEL-23644

commit 1db747d75b1dbe17bf4283ed87bd3b7a92010f34
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Fri Nov 17 19:46:21 2023 -0800

    bpf: omit default off=0 and imm=0 in register state log
    
    Simplify the BPF verifier log further by omitting the default (and
    frequently irrelevant) off=0 and imm=0 parts for non-SCALAR_VALUE
    registers. As can be seen from the fixed tests, these are often visual
    noise for the PTR_TO_CTX register and even for PTR_TO_PACKET registers.
    
    Omitting default values follows the rest of the register state logic: we
    omit defaults to keep the verifier log succinct and to highlight
    interesting state that deviates from the default. E.g., we do the same
    for var_off when it's fully unknown, as it gives no additional information.
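    
    As a minimal standalone sketch of that formatting idea (hypothetical
    stand-in types and names, not the verifier's actual code), default-valued
    fields are simply skipped when rendering a register:
    
      #include <stdio.h>
    
      /* Hypothetical stand-in for the relevant bits of bpf_reg_state. */
      struct reg_sketch {
              int off;
              long long imm;
      };
    
      /* Print "pkt", appending off=/imm= only when they deviate from 0. */
      static void print_reg(const struct reg_sketch *reg)
      {
              const char *sep = "(";
    
              printf("pkt");
              if (reg->off) {
                      printf("%soff=%d", sep, reg->off);
                      sep = ",";
              }
              if (reg->imm) {
                      printf("%simm=%lld", sep, reg->imm);
                      sep = ",";
              }
              if (sep[0] == ',')      /* at least one field was printed */
                      printf(")");
              printf("\n");
      }
    
      int main(void)
      {
              struct reg_sketch dflt = { 0, 0 }, adj = { 1, 0 };
    
              print_reg(&dflt);       /* -> pkt        */
              print_reg(&adj);        /* -> pkt(off=1) */
              return 0;
      }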
    
    Acked-by: Eduard Zingerman <eddyz87@gmail.com>
    Acked-by: Stanislav Fomichev <sdf@google.com>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20231118034623.3320920-7-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Viktor Malik <vmalik@redhat.com>
2024-06-25 10:51:52 +02:00
Artem Savkov 79f00482a2 selftests/bpf: Make align selftests more robust
JIRA: https://issues.redhat.com/browse/RHEL-23643

commit cde785142885e1fc62a9ae92e7aae90285ed3d79
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Oct 11 15:37:26 2023 -0700

    selftests/bpf: Make align selftests more robust
    
    The align subtest is very specific and finicky about the expected
    verifier log output and format. This is often completely unnecessary, as
    in many situations the test actually only cares about the var_off part of
    the register state. But given how exact it is right now, any tiny
    verifier log change can lead to align test failures, requiring constant
    adjustment.
    
    This patch tries to make this a bit more robust by making the logic first
    search for the specified register and then allowing a match against only
    a portion of the register state, not everything exactly. This will come
    in handy with follow-up changes to SCALAR register output disambiguation.
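    
    A minimal sketch of that matching idea (a hypothetical helper, not the
    selftest's actual code): find the register's token in the state line,
    then accept any line that contains the expected fragment after it:
    
      #include <stdbool.h>
      #include <string.h>
    
      /* Match only a portion of a register's printed state, e.g.
       * expected = "var_off=(0x0; 0xff)", so unrelated verifier log
       * format changes don't break the test. */
      static bool reg_state_matches(const char *line, const char *reg,
                                    const char *expected)
      {
              const char *p = strstr(line, reg);      /* e.g. reg = "R5" */
    
              return p && strstr(p, expected);
      }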
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: John Fastabend <john.fastabend@gmail.com>
    Acked-by: Eduard Zingerman <eddyz87@gmail.com>
    Link: https://lore.kernel.org/bpf/20231011223728.3188086-4-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2024-03-27 10:27:53 +01:00
Artem Savkov 1bbfb081eb bpf: Fix __reg_bound_offset 64->32 var_off subreg propagation
Bugzilla: https://bugzilla.redhat.com/2221599

commit 7be14c1c9030f73cc18b4ff23b78a0a081f16188
Author: Daniel Borkmann <daniel@iogearbox.net>
Date:   Wed Mar 22 22:30:55 2023 +0100

    bpf: Fix __reg_bound_offset 64->32 var_off subreg propagation
    
    Xu reports that after commit 3f50f132d8 ("bpf: Verifier, do explicit ALU32
    bounds tracking"), the following BPF program is rejected by the verifier:
    
       0: (61) r2 = *(u32 *)(r1 +0)          ; R2_w=pkt(off=0,r=0,imm=0)
       1: (61) r3 = *(u32 *)(r1 +4)          ; R3_w=pkt_end(off=0,imm=0)
       2: (bf) r1 = r2
       3: (07) r1 += 1
       4: (2d) if r1 > r3 goto pc+8
       5: (71) r1 = *(u8 *)(r2 +0)           ; R1_w=scalar(umax=255,var_off=(0x0; 0xff))
       6: (18) r0 = 0x7fffffffffffff10
       8: (0f) r1 += r0                      ; R1_w=scalar(umin=0x7fffffffffffff10,umax=0x800000000000000f)
       9: (18) r0 = 0x8000000000000000
      11: (07) r0 += 1
      12: (ad) if r0 < r1 goto pc-2
      13: (b7) r0 = 0
      14: (95) exit
    
    And the verifier log says:
    
      func#0 @0
      0: R1=ctx(off=0,imm=0) R10=fp0
      0: (61) r2 = *(u32 *)(r1 +0)          ; R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
      1: (61) r3 = *(u32 *)(r1 +4)          ; R1=ctx(off=0,imm=0) R3_w=pkt_end(off=0,imm=0)
      2: (bf) r1 = r2                       ; R1_w=pkt(off=0,r=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
      3: (07) r1 += 1                       ; R1_w=pkt(off=1,r=0,imm=0)
      4: (2d) if r1 > r3 goto pc+8          ; R1_w=pkt(off=1,r=1,imm=0) R3_w=pkt_end(off=0,imm=0)
      5: (71) r1 = *(u8 *)(r2 +0)           ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) R2_w=pkt(off=0,r=1,imm=0)
      6: (18) r0 = 0x7fffffffffffff10       ; R0_w=9223372036854775568
      8: (0f) r1 += r0                      ; R0_w=9223372036854775568 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775823,s32_min=-240,s32_max=15)
      9: (18) r0 = 0x8000000000000000       ; R0_w=-9223372036854775808
      11: (07) r0 += 1                      ; R0_w=-9223372036854775807
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775809)
      13: (b7) r0 = 0                       ; R0_w=0
      14: (95) exit
    
      from 12 to 11: R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775806
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775806 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775810,var_off=(0x8000000000000000; 0xffffffff))
      13: safe
    
      [...]
    
      from 12 to 11: R0_w=-9223372036854775795 R1=scalar(umin=9223372036854775822,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775794
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775794 R1=scalar(umin=9223372036854775822,umax=9223372036854775822,var_off=(0x8000000000000000; 0xffffffff))
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775794 R1=scalar(umin=9223372036854775823,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775793
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775793 R1=scalar(umin=9223372036854775823,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff))
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775793 R1=scalar(umin=9223372036854775824,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff)) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775792
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775792 R1=scalar(umin=9223372036854775824,umax=9223372036854775823,var_off=(0x8000000000000000; 0xffffffff))
      13: safe
    
      [...]
    
    The 64bit umin=9223372036854775810 bound continuously bumps by +1 while
    umax=9223372036854775823 stays as-is until the verifier complexity limit
    is reached and the program is finally rejected. During this simulation,
    the umin also eventually surpasses umax. Looking at the first 'from 12
    to 11' output line from the loop, R1 has the following state:
    
      R1_w=scalar(umin=0x8000000000000002 (9223372036854775810),
                  umax=0x800000000000000f (9223372036854775823),
              var_off=(0x8000000000000000;
                               0xffffffff))
    
    The var_off is technically not in an inconsistent state, but it is very
    imprecise, far surpassing the 64bit umax bounds, whereas the expected
    output with refined known bits in var_off should have been:
    
      R1_w=scalar(umin=0x8000000000000002 (9223372036854775810),
                  umax=0x800000000000000f (9223372036854775823),
              var_off=(0x8000000000000000;
                                      0xf))
    
    In the above log, var_off stays at var_off=(0x8000000000000000; 0xffffffff)
    and does not converge into a narrower mask where more bits become known,
    which would eventually transform R1 into a constant in the
    umin=9223372036854775823, umax=9223372036854775823 case, where the
    verifier would have terminated and let the program pass.
    
    __reg_combine_64_into_32() marks the subregister unknown and propagates
    the 64bit {s,u}min/{s,u}max bounds to their 32bit equivalents iff they
    are within the 32bit universe. The question came up whether
    __reg_combine_64_into_32() should special-case the situation where the
    64bit {s,u}min bounds have the same value as the 64bit {s,u}max bounds,
    and then assign the latter to the 32bit reg->{s,u}32_{min,max}_value as
    well. As can be seen from the above example, however, that is just /one/
    special case and not a /generic/ solution: the above example would still
    not be addressed this way and would remain at an imprecise
    var_off=(0x8000000000000000; 0xffffffff).
    
    The improvement is needed in __reg_bound_offset() to refine var32_off with
    the updated var64_off instead of the prior reg->var_off. The reg_bounds_sync()
    code first refines information about the register's min/max bounds via
    __update_reg_bounds() from the current var_off, then in __reg_deduce_bounds()
    from the sign bit, and with the potentially learned bits from the bounds
    it updates the var_off tnum in __reg_bound_offset(). For example,
    intersecting with the old var_off might have improved the bounds
    slightly: if umax was 0x7f...f and var_off was (0; 0xf...fc), then the
    new var_off will result in (0; 0x7f...fc). The intersected var64_off then
    holds the universe which is a superset of var32_off. The point for the
    latter is not to broaden, but to further refine known bits based on the
    intersection of var_off with the 32 bit bounds, so that we later
    construct the final var_off from the upper and lower 32 bits. The final
    __update_reg_bounds() can then potentially still slightly refine the
    bounds if more bits became known from the new var_off.
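    
    Sketched in code, the resulting helper looks roughly as follows (a
    paraphrase of the fixed kernel/bpf/verifier.c; the key change is that
    var32_off is now intersected from the freshly refined var64_off rather
    than from the stale reg->var_off):
    
      static void __reg_bound_offset(struct bpf_reg_state *reg)
      {
              struct tnum var64_off = tnum_intersect(reg->var_off,
                                                     tnum_range(reg->umin_value,
                                                                reg->umax_value));
              /* refine the 32bit tnum from the updated 64bit tnum */
              struct tnum var32_off = tnum_intersect(tnum_subreg(var64_off),
                                                     tnum_range(reg->u32_min_value,
                                                                reg->u32_max_value));
    
              /* stitch the final var_off from upper and lower 32 bits */
              reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
      }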
    
    After the improvement, we can see R1 converging successively:
    
      func#0 @0
      0: R1=ctx(off=0,imm=0) R10=fp0
      0: (61) r2 = *(u32 *)(r1 +0)          ; R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
      1: (61) r3 = *(u32 *)(r1 +4)          ; R1=ctx(off=0,imm=0) R3_w=pkt_end(off=0,imm=0)
      2: (bf) r1 = r2                       ; R1_w=pkt(off=0,r=0,imm=0) R2_w=pkt(off=0,r=0,imm=0)
      3: (07) r1 += 1                       ; R1_w=pkt(off=1,r=0,imm=0)
      4: (2d) if r1 > r3 goto pc+8          ; R1_w=pkt(off=1,r=1,imm=0) R3_w=pkt_end(off=0,imm=0)
      5: (71) r1 = *(u8 *)(r2 +0)           ; R1_w=scalar(umax=255,var_off=(0x0; 0xff)) R2_w=pkt(off=0,r=1,imm=0)
      6: (18) r0 = 0x7fffffffffffff10       ; R0_w=9223372036854775568
      8: (0f) r1 += r0                      ; R0_w=9223372036854775568 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775823,s32_min=-240,s32_max=15)
      9: (18) r0 = 0x8000000000000000       ; R0_w=-9223372036854775808
      11: (07) r0 += 1                      ; R0_w=-9223372036854775807
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775568,umax=9223372036854775809)
      13: (b7) r0 = 0                       ; R0_w=0
      14: (95) exit
    
      from 12 to 11: R0_w=-9223372036854775807 R1_w=scalar(umin=9223372036854775810,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775806
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775806 R1_w=-9223372036854775806
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775806 R1_w=scalar(umin=9223372036854775811,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775805
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775805 R1_w=-9223372036854775805
      13: safe
    
      [...]
    
      from 12 to 11: R0_w=-9223372036854775798 R1=scalar(umin=9223372036854775819,umax=9223372036854775823,var_off=(0x8000000000000008; 0x7),s32_min=8,s32_max=15,u32_min=8,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775797
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775797 R1=-9223372036854775797
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775797 R1=scalar(umin=9223372036854775820,umax=9223372036854775823,var_off=(0x800000000000000c; 0x3),s32_min=12,s32_max=15,u32_min=12,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775796
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775796 R1=-9223372036854775796
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775796 R1=scalar(umin=9223372036854775821,umax=9223372036854775823,var_off=(0x800000000000000c; 0x3),s32_min=12,s32_max=15,u32_min=12,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775795
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775795 R1=-9223372036854775795
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775795 R1=scalar(umin=9223372036854775822,umax=9223372036854775823,var_off=(0x800000000000000e; 0x1),s32_min=14,s32_max=15,u32_min=14,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775794
      12: (ad) if r0 < r1 goto pc-2         ; R0_w=-9223372036854775794 R1=-9223372036854775794
      13: safe
    
      from 12 to 11: R0_w=-9223372036854775794 R1=-9223372036854775793 R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      11: (07) r0 += 1                      ; R0_w=-9223372036854775793
      12: (ad) if r0 < r1 goto pc-2
      last_idx 12 first_idx 12
      parent didn't have regs=1 stack=0 marks: R0_rw=P-9223372036854775801 R1_r=scalar(umin=9223372036854775815,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      last_idx 11 first_idx 11
      regs=1 stack=0 before 11: (07) r0 += 1
      parent didn't have regs=1 stack=0 marks: R0_rw=P-9223372036854775805 R1_rw=scalar(umin=9223372036854775812,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
      last_idx 12 first_idx 0
      regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2
      regs=1 stack=0 before 11: (07) r0 += 1
      regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2
      regs=1 stack=0 before 11: (07) r0 += 1
      regs=1 stack=0 before 12: (ad) if r0 < r1 goto pc-2
      regs=1 stack=0 before 11: (07) r0 += 1
      regs=1 stack=0 before 9: (18) r0 = 0x8000000000000000
      last_idx 12 first_idx 12
      parent didn't have regs=2 stack=0 marks: R0_rw=P-9223372036854775801 R1_r=Pscalar(umin=9223372036854775815,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2=pkt(off=0,r=1,imm=0) R3=pkt_end(off=0,imm=0) R10=fp0
      last_idx 11 first_idx 11
      regs=2 stack=0 before 11: (07) r0 += 1
      parent didn't have regs=2 stack=0 marks: R0_rw=P-9223372036854775805 R1_rw=Pscalar(umin=9223372036854775812,umax=9223372036854775823,var_off=(0x8000000000000000; 0xf),s32_min=0,s32_max=15,u32_max=15) R2_w=pkt(off=0,r=1,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
      last_idx 12 first_idx 0
      regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2
      regs=2 stack=0 before 11: (07) r0 += 1
      regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2
      regs=2 stack=0 before 11: (07) r0 += 1
      regs=2 stack=0 before 12: (ad) if r0 < r1 goto pc-2
      regs=2 stack=0 before 11: (07) r0 += 1
      regs=2 stack=0 before 9: (18) r0 = 0x8000000000000000
      regs=2 stack=0 before 8: (0f) r1 += r0
      regs=3 stack=0 before 6: (18) r0 = 0x7fffffffffffff10
      regs=2 stack=0 before 5: (71) r1 = *(u8 *)(r2 +0)
      13: safe
    
      from 4 to 13: safe
      verification time 322 usec
      stack depth 0
      processed 56 insns (limit 1000000) max_states_per_insn 1 total_states 3 peak_states 3 mark_read 1
    
    Along with this improvement, this also fixes up a test case that matches
    on the verifier log. The updated log now has a refined var_off, too.
    
    Fixes: 3f50f132d8 ("bpf: Verifier, do explicit ALU32 bounds tracking")
    Reported-by: Xu Kuohai <xukuohai@huaweicloud.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Reviewed-by: John Fastabend <john.fastabend@gmail.com>
    Link: https://lore.kernel.org/bpf/20230314203424.4015351-2-xukuohai@huaweicloud.com
    Link: https://lore.kernel.org/bpf/20230322213056.2470-1-daniel@iogearbox.net

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-09-22 09:12:19 +02:00
Artem Savkov fca9d3f8d3 selftests/bpf: enhance align selftest's expected log matching
Bugzilla: https://bugzilla.redhat.com/2221599

commit 6f876e75d316a75957f3d43c3a8c2a6fe9bc18b2
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Thu Mar 2 15:50:01 2023 -0800

    selftests/bpf: enhance align selftest's expected log matching
    
    Allow searching for the expected register state in all the verifier log
    output related to the specified instruction number.
    
    See the added comment for an example of a possible situation arising from
    a simple enhancement done in the next patch, which fixes handling of the
    env->test_state_freq flag in the state checkpointing logic.
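    
    A rough illustration of that search (a hypothetical helper, not the
    actual selftest code): scan every log line prefixed with the instruction
    number and accept if any of them carries the expected state:
    
      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>
    
      static bool insn_log_matches(const char *log, int insn_idx,
                                   const char *expected)
      {
              char prefix[32];
              const char *line = log;
    
              snprintf(prefix, sizeof(prefix), "%d:", insn_idx);
              /* check all lines for this insn, not just the first one */
              while (line && *line) {
                      if (!strncmp(line, prefix, strlen(prefix)) &&
                          strstr(line, expected))
                              return true;
                      line = strchr(line, '\n');
                      if (line)
                              line++;
              }
              return false;
      }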
    
    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20230302235015.2044271-4-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2023-09-22 09:12:11 +02:00
Jerome Marchand d24998a76a selftests/bpf: make test_align selftest more robust
Bugzilla: https://bugzilla.redhat.com/2177177

commit 4f999b767769b76378c3616c624afd6f4bb0d99f
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Fri Nov 4 09:36:49 2022 -0700

    selftests/bpf: make test_align selftest more robust

    The test_align selftest relies on the BPF verifier log emitting register
    states for specific instructions in an expected format. Unfortunately,
    the BPF verifier's precision backtracking log interferes with such
    expectations, and instructions on which precision propagation happens
    sometimes don't output the full expected register states. This does
    indeed look like something to be improved in the BPF verifier, but it is
    beyond the scope of this patch set.

    So, to make test_align a bit more robust, inject a few dummy R4 = R5
    instructions which capture the desired state of R5 and won't have
    precision tracking logs on them. This fixes the tests until we can
    improve BPF verifier output in the presence of precision tracking.
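
    For illustration, such a dummy copy might look as follows in a raw-insn
    test case (a hedged example built on the BPF_MOV64_REG macro, not the
    literal test_align source):

      #include <linux/filter.h>

      static const struct bpf_insn snippet[] = {
              /* ... R5 computed by the instructions above ... */
              /* dummy copy: the verifier prints R5's state on this plain
               * mov, free of precision-backtracking log noise */
              BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
      };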

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/r/20221104163649.121784-7-andrii@kernel.org
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2023-04-28 11:43:01 +02:00
Jerome Marchand 9753e95f4a bpf: Small BPF verifier log improvements
Bugzilla: https://bugzilla.redhat.com/2120966

commit 7df5072cc05fd1aab5823bbc465d033cd292fca8
Author: Mykola Lysenko <mykolal@fb.com>
Date:   Tue Mar 1 14:27:45 2022 -0800

    bpf: Small BPF verifier log improvements

    In particular, these include:

      1) Remove output of inv for scalars in print_verifier_state
      2) Replace inv with scalar in verifier error messages
      3) Remove _value suffixes for umin/umax/s32_min/etc (except map_value)
      4) Remove output of id=0
      5) Remove output of ref_obj_id=0

    Signed-off-by: Mykola Lysenko <mykolal@fb.com>
    Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20220301222745.1667206-1-mykolal@fb.com

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
2022-10-25 14:57:55 +02:00
Artem Savkov 340f082561 bpf: Right align verifier states in verifier logs.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 2e5766483c8c5cf886b4dc647a1741738dde7d79
Author: Christy Lee <christylee@fb.com>
Date:   Thu Dec 16 19:42:45 2021 -0800

    bpf: Right align verifier states in verifier logs.

    Make the verifier logs more readable by printing the verifier states
    on the corresponding instruction line. If the previous line was
    not a BPF instruction, then print the verifier states on their own
    line.

    Before:

    Validating test_pkt_access_subprog3() func#3...
    86: R1=invP(id=0) R2=ctx(id=0,off=0,imm=0) R10=fp0
    ; int test_pkt_access_subprog3(int val, struct __sk_buff *skb)
    86: (bf) r6 = r2
    87: R2=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0)
    87: (bc) w7 = w1
    88: R1=invP(id=0) R7_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
    ; return get_skb_len(skb) * get_skb_ifindex(val, skb, get_constant(123));
    88: (bf) r1 = r6
    89: R1_w=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0)
    89: (85) call pc+9
    Func#4 is global and valid. Skipping.
    90: R0_w=invP(id=0)
    90: (bc) w8 = w0
    91: R0_w=invP(id=0) R8_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
    ; return get_skb_len(skb) * get_skb_ifindex(val, skb, get_constant(123));
    91: (b7) r1 = 123
    92: R1_w=invP123
    92: (85) call pc+65
    Func#5 is global and valid. Skipping.
    93: R0=invP(id=0)

    After:

    86: R1=invP(id=0) R2=ctx(id=0,off=0,imm=0) R10=fp0
    ; int test_pkt_access_subprog3(int val, struct __sk_buff *skb)
    86: (bf) r6 = r2                      ; R2=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0)
    87: (bc) w7 = w1                      ; R1=invP(id=0) R7_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
    ; return get_skb_len(skb) * get_skb_ifindex(val, skb, get_constant(123));
    88: (bf) r1 = r6                      ; R1_w=ctx(id=0,off=0,imm=0) R6_w=ctx(id=0,off=0,imm=0)
    89: (85) call pc+9
    Func#4 is global and valid. Skipping.
    90: R0_w=invP(id=0)
    90: (bc) w8 = w0                      ; R0_w=invP(id=0) R8_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
    ; return get_skb_len(skb) * get_skb_ifindex(val, skb, get_constant(123));
    91: (b7) r1 = 123                     ; R1_w=invP123
    92: (85) call pc+65
    Func#5 is global and valid. Skipping.
    93: R0=invP(id=0)

    Signed-off-by: Christy Lee <christylee@fb.com>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:49 +02:00
Artem Savkov cb4c537f4f bpf: Only print scratched registers and stack slots to verifier logs.
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit 0f55f9ed21f96630c6ec96805d42f92c0b458b37
Author: Christy Lee <christylee@fb.com>
Date:   Thu Dec 16 13:33:56 2021 -0800

    bpf: Only print scratched registers and stack slots to verifier logs.

    When printing the verifier state at any log level, print the full
    verifier state only on function calls or on errors. Otherwise, print
    only the registers and stack slots that were accessed.
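
    The bookkeeping behind this can be sketched as a bitmask of touched
    registers (a simplified standalone sketch of the idea; upstream keeps
    the equivalent state in bpf_verifier_env and handles stack slots
    analogously):

      #include <stdbool.h>
      #include <stdint.h>

      static uint32_t scratched_regs;   /* bit N set => rN was accessed */

      static void mark_reg_scratched(uint32_t regno)
      {
              scratched_regs |= UINT32_C(1) << regno;
      }

      static bool reg_scratched(uint32_t regno)
      {
              return scratched_regs & (UINT32_C(1) << regno);
      }

      /* after printing one state line, reset for the next instruction */
      static void clear_scratched(void)
      {
              scratched_regs = 0;
      }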

    Log size differences:

    verif_scale_loop6 before: 234566564
    verif_scale_loop6 after: 72143943
    69% size reduction

    kfree_skb before: 166406
    kfree_skb after: 55386
    69% size reduction

    Before:

    156: (61) r0 = *(u32 *)(r1 +0)
    157: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R1=ctx(id=0,off=0,imm=0) R2_w=invP0 R10=fp0 fp-8_w=00000000 fp-16_w=00\
    000000 fp-24_w=00000000 fp-32_w=00000000 fp-40_w=00000000 fp-48_w=00000000 fp-56_w=00000000 fp-64_w=00000000 fp-72_w=00000000 fp-80_w=00000\
    000 fp-88_w=00000000 fp-96_w=00000000 fp-104_w=00000000 fp-112_w=00000000 fp-120_w=00000000 fp-128_w=00000000 fp-136_w=00000000 fp-144_w=00\
    000000 fp-152_w=00000000 fp-160_w=00000000 fp-168_w=00000000 fp-176_w=00000000 fp-184_w=00000000 fp-192_w=00000000 fp-200_w=00000000 fp-208\
    _w=00000000 fp-216_w=00000000 fp-224_w=00000000 fp-232_w=00000000 fp-240_w=00000000 fp-248_w=00000000 fp-256_w=00000000 fp-264_w=00000000 f\
    p-272_w=00000000 fp-280_w=00000000 fp-288_w=00000000 fp-296_w=00000000 fp-304_w=00000000 fp-312_w=00000000 fp-320_w=00000000 fp-328_w=00000\
    000 fp-336_w=00000000 fp-344_w=00000000 fp-352_w=00000000 fp-360_w=00000000 fp-368_w=00000000 fp-376_w=00000000 fp-384_w=00000000 fp-392_w=\
    00000000 fp-400_w=00000000 fp-408_w=00000000 fp-416_w=00000000 fp-424_w=00000000 fp-432_w=00000000 fp-440_w=00000000 fp-448_w=00000000
    ; return skb->len;
    157: (95) exit
    Func#4 is safe for any args that match its prototype
    Validating get_constant() func#5...
    158: R1=invP(id=0) R10=fp0
    ; int get_constant(long val)
    158: (bf) r0 = r1
    159: R0_w=invP(id=1) R1=invP(id=1) R10=fp0
    ; return val - 122;
    159: (04) w0 += -122
    160: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R1=invP(id=1) R10=fp0
    ; return val - 122;
    160: (95) exit
    Func#5 is safe for any args that match its prototype
    Validating get_skb_ifindex() func#6...
    161: R1=invP(id=0) R2=ctx(id=0,off=0,imm=0) R3=invP(id=0) R10=fp0
    ; int get_skb_ifindex(int val, struct __sk_buff *skb, int var)
    161: (bc) w0 = w3
    162: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R1=invP(id=0) R2=ctx(id=0,off=0,imm=0) R3=invP(id=0) R10=fp0

    After:

    156: (61) r0 = *(u32 *)(r1 +0)
    157: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R1=ctx(id=0,off=0,imm=0)
    ; return skb->len;
    157: (95) exit
    Func#4 is safe for any args that match its prototype
    Validating get_constant() func#5...
    158: R1=invP(id=0) R10=fp0
    ; int get_constant(long val)
    158: (bf) r0 = r1
    159: R0_w=invP(id=1) R1=invP(id=1)
    ; return val - 122;
    159: (04) w0 += -122
    160: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
    ; return val - 122;
    160: (95) exit
    Func#5 is safe for any args that match its prototype
    Validating get_skb_ifindex() func#6...
    161: R1=invP(id=0) R2=ctx(id=0,off=0,imm=0) R3=invP(id=0) R10=fp0
    ; int get_skb_ifindex(int val, struct __sk_buff *skb, int var)
    161: (bc) w0 = w3
    162: R0_w=invP(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R3=invP(id=0)

    Signed-off-by: Christy Lee <christylee@fb.com>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Acked-by: Andrii Nakryiko <andrii@kernel.org>
    Link: https://lore.kernel.org/bpf/20211216213358.3374427-2-christylee@fb.com

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:49 +02:00
Artem Savkov 4a71970262 selftests/bpf: Convert legacy prog load APIs to bpf_prog_load()
Bugzilla: https://bugzilla.redhat.com/2069046

Upstream Status: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

commit d8e86407e5fc6c3da1e336f89bd3e9bbc1c0cf60
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Wed Nov 3 15:08:42 2021 -0700

    selftests/bpf: Convert legacy prog load APIs to bpf_prog_load()

    Convert all uses of the legacy low-level BPF program loading APIs
    (mostly bpf_load_program_xattr(), but also some bpf_verify_program()) to
    bpf_prog_load().
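
    For reference, a minimal example of the modern API being converted to
    (the program type, name, and license below are illustrative):

      #include <bpf/bpf.h>

      static char log_buf[64 * 1024];

      static int load_prog(const struct bpf_insn *insns, size_t insn_cnt)
      {
              LIBBPF_OPTS(bpf_prog_load_opts, opts,
                      .log_buf = log_buf,
                      .log_size = sizeof(log_buf),
                      .log_level = 2,   /* verbose verifier state output */
              );

              return bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, "align_test",
                                   "GPL", insns, insn_cnt, &opts);
      }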

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    Link: https://lore.kernel.org/bpf/20211103220845.2676888-10-andrii@kernel.org

Signed-off-by: Artem Savkov <asavkov@redhat.com>
2022-08-24 12:53:33 +02:00
Jean-Philippe Brucker 3615bdf6d9 selftests/bpf: Fix "dubious pointer arithmetic" test
The verifier trace changed following a bugfix. After checking the 64-bit
sign, only the upper bit mask is known, not bit 31. Update the test
accordingly.

Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2020-12-10 13:11:30 -08:00
Alexei Starovoitov 75748837b7 bpf: Propagate scalar ranges through register assignments.
The llvm register allocator may use two different registers to represent the
same virtual register. In such cases the following pattern can be observed:
1047: (bf) r9 = r6
1048: (a5) if r6 < 0x1000 goto pc+1
1050: ...
1051: (a5) if r9 < 0x2 goto pc+66
1052: ...
1053: (bf) r2 = r9 /* r2 needs to have upper and lower bounds */

This is normal behavior of the greedy register allocator.
Slides 137+ explain why regalloc introduces such a register copy:
http://llvm.org/devmtg/2018-04/slides/Yatsina-LLVM%20Greedy%20Register%20Allocator.pdf
There is no way to tell llvm 'not to do this'.
Hence the verifier has to recognize such patterns.

In order to track this information without backtracking, allocate an ID
for scalars in a similar way as is done for find_good_pkt_pointers().

When the verifier encounters an r9 = r6 assignment it will assign the same ID
to both registers. Later, if either register's range is narrowed via a
conditional jump, propagate the register state into the other register.

Clear the register ID in adjust_reg_min_max_vals() for any alu instruction.
The register ID is ignored for scalars in regsafe() and doesn't affect state
pruning. mark_reg_unknown() clears the ID; it's used to process call, endian
and other instructions. Hence the ID is explicitly cleared only in
adjust_reg_min_max_vals() and in the 32-bit mov.
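
A simplified standalone sketch of the mechanism (hypothetical types and
names, not the verifier's code): a scalar copy links both registers under
one ID, and a later narrowing is broadcast to every register with that ID:

  struct scalar {
          unsigned int id;              /* 0 means: not linked */
          unsigned long long umin, umax;
  };

  static unsigned int next_id;

  /* r9 = r6: make both registers share one ID */
  static void copy_scalar(struct scalar *dst, struct scalar *src)
  {
          if (!src->id)
                  src->id = ++next_id;
          *dst = *src;
  }

  /* after e.g. "if r9 < 0x2", push r9's narrowed range to its twins */
  static void propagate_bounds(struct scalar *regs, int nregs,
                               const struct scalar *narrowed)
  {
          for (int i = 0; i < nregs; i++)
                  if (regs[i].id && regs[i].id == narrowed->id) {
                          regs[i].umin = narrowed->umin;
                          regs[i].umax = narrowed->umax;
                  }
  }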

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20201009011240.48506-2-alexei.starovoitov@gmail.com
2020-10-09 22:03:06 +02:00
Stanislav Fomichev 3b09d27cc9 selftests/bpf: Move test_align under test_progs
There is a much higher chance we will catch regressions if the
test is part of test_progs.

Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200515194904.229296-2-sdf@google.com
2020-05-16 01:18:14 +02:00