18 Commits

Author SHA1 Message Date
31de58baa7 assume loops are finite
docs:
Assume that a loop with an exit will eventually take the exit and not loop indefinitely. This allows the compiler to remove loops that otherwise have no side-effects, not considering eventual endless looping as such.

This option is enabled by default at -O2 for C++ with -std=c++11 or higher.
2022-05-18 14:34:11 +02:00
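A minimal illustration of what the quoted docs describe (a sketch, not part of the commit; the flag is assumed to be -ffinite-loops based on the wording):

```bash
# A loop with an exit but no side effects; with the finite-loops assumption
# GCC is allowed to drop it entirely. Compare the assembly with and without.
cat > finite-loop.c <<'EOF'
int f(unsigned n) {
    unsigned i = 0;
    while (i != n)      /* no side effects; never exits when n is odd */
        i += 2;
    return 0;
}
EOF
gcc -O2 -ffinite-loops    -S -o with.s    finite-loop.c
gcc -O2 -fno-finite-loops -S -o without.s finite-loop.c
diff with.s without.s
```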
cee2554f3f don't split LTO into multiple segments
The rationale behind the default ('dynamic') is to do some multiprocessing in a first local optimisation step. This can lead to failures at link time (rare!) but is also not optimal.

More details:
https://gcc.gnu.org/onlinedocs/gccint/LTO-Overview.html
2022-05-18 14:34:11 +02:00
3dcf691597 enable vectorization of trees
docs:
Perform vectorization on trees. This flag enables -ftree-loop-vectorize and -ftree-slp-vectorize if not explicitly specified.
2022-05-18 14:34:11 +02:00
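For context, a quick way to see which loops the vectorizer actually transforms once this is on (a sketch; `example.c` is a placeholder file, and the umbrella flag is assumed to be -ftree-vectorize):

```bash
# -fopt-info-vec-optimized prints a note for every loop/basic block the vectorizer handles.
gcc -O2 -ftree-vectorize -march=x86-64-v3 -fopt-info-vec-optimized -c example.c
```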
c8d28055ec Constructs webs
docs:
Constructs webs as commonly used for register allocation purposes and assign each web individual pseudo register. This allows the register allocation pass to operate on pseudos directly, but also strengthens several other optimization passes, such as CSE, loop optimizer and trivial dead code remover. It can, however, make debugging impossible, since variables no longer stay in a “home register”.
2022-05-18 14:34:11 +02:00
27e539fb28 Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation
The docs state that it is enabled by default with -funroll-loops, which we don't use.

docs:
Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization most benefits processors with lots of registers. Depending on the debug information format adopted by the target, however, it can make debugging impossible, since variables no longer stay in a “home register”.

Enabled by default with -funroll-loops.
2022-05-18 14:34:11 +02:00
82f35b871a correct the cost model for vectorization of loops marked with the OpenMP simd directive
GCC's behavior again does not follow the documentation, which suggests 'dynamic' would be used by default. GCC 12 actually uses 'unlimited' here, which does no estimation of cost versus benefit at all.

Docs:
Alter the cost model used for vectorization of loops marked with the OpenMP simd directive. The model argument should be one of ‘unlimited’, ‘dynamic’, ‘cheap’. All values of model have the same meaning as described in -fvect-cost-model and by default a cost model defined with -fvect-cost-model is used.
2022-05-18 14:34:11 +02:00
9abe7a0c48 set regions for the integrated register allocator to the default value
The behavior of GCC 12 does not follow the docs here: regardless of -O2/-O3 or -march=native/generic/x86-64-v3, it is always set to 'one'. The docs state it should be set to 'mixed' when optimizing for speed.

Docs:
Use specified regions for the integrated register allocator. The region argument should be one of the following:

‘all’

    Use all loops as register allocation regions. This can give the best results for machines with a small and/or irregular register set.
‘mixed’

    Use all loops except for loops with small register pressure as the regions. This value usually gives the best results in most cases and for most architectures, and is enabled by default when compiling with optimization for speed (-O, -O2, …).
‘one’

    Use all functions as a single region. This typically results in the smallest code size, and is enabled by default for -Os or -O0.
2022-05-18 14:34:11 +02:00
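Both this commit and the cost-model commit above rest on observed GCC 12 defaults rather than the documented ones. A sketch of how such defaults can be checked (flag names assumed to be -fira-region and -fsimd-cost-model; output format varies between GCC versions):

```bash
# Dump the optimizer settings GCC actually uses for the given options.
gcc -O3 -march=x86-64-v3 -Q --help=optimizers | grep -E 'ira-region|simd-cost-model'
```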
a8ed047a5e Detect paths that trigger erroneous or undefined behavior
docs:
Detect paths that trigger erroneous or undefined behavior due to a null value being used in a way forbidden by a returns_nonnull or nonnull attribute. Isolate those paths from the main control flow and turn the statement with erroneous or undefined behavior into a trap. This is not currently enabled, but may be enabled by -O2 in the future.
2022-05-18 14:34:11 +02:00
5792db8954 enable interprocedural pointer analysis and interprocedural modification and reference analysis
docs:
Perform interprocedural pointer analysis and interprocedural modification and reference analysis. This option can cause excessive memory and compile-time usage on large compilation units. It is not enabled by default at any optimization level.
2022-05-18 14:34:11 +02:00
2bdf571fc9 enable automatic splitting of the stack before it overflows
docs:
Generate code to automatically split the stack before it overflows. The resulting program has a discontiguous stack which can only overflow if the program is unable to allocate any more memory. This is most useful when running threaded programs, as it is no longer necessary to calculate a good stack size to use for each thread. This is currently only implemented for the x86 targets running GNU/Linux.

When code compiled with -fsplit-stack calls code compiled without -fsplit-stack, there may not be much stack space available for the latter code to run. If compiling all code, including library code, with -fsplit-stack is not an option, then the linker can fix up these calls so that the code compiled without -fsplit-stack always has a large stack. Support for this is implemented in the gold linker in GNU binutils release 2.21 and later.
2022-05-18 14:34:11 +02:00
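A hedged sketch of the linker requirement described above (`threads.c` is a placeholder; gold is selected explicitly, per the binutils note in the docs):

```bash
# Mixing -fsplit-stack and non-split-stack code needs a linker that can fix up the calls;
# the docs point at gold (binutils >= 2.21).
gcc -fsplit-stack -fuse-ld=gold -pthread threads.c -o threads
```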
96e1ed1c7d enable graphite with identity transformation
docs:
Enable the identity transformation for graphite. For every SCoP we generate the polyhedral representation and transform it back to gimple. Using -fgraphite-identity we can check the costs or benefits of the GIMPLE -> GRAPHITE -> GIMPLE transformation. Some minimal optimizations are also performed by the code generator isl, like index splitting and dead code elimination in loops.
2022-05-18 14:34:11 +02:00
191d8915a9 allow dead exception code to be removed
docs:
Consider that instructions that may throw exceptions but don’t otherwise contribute to the execution of the program can be optimized away. This does not affect calls to functions except those with the pure or const attributes. This option is enabled by default for the Ada and C++ compilers, as permitted by the language specifications. Optimization passes that cause dead exceptions to be removed are enabled independently at different optimization levels.
2022-05-18 14:34:11 +02:00
0bb2d4b27a enable Swing Modulo Scheduling for basic innermost loop optimizations without blocking further rescheduling
docs:
SMS is intended to schedule instructions of loops rather than the traditional scheduler (in GCC) that does not give a special handling for loops. For more information on the theory behind SMS take a look at the 2004 GCC summit proceedings (page 55). This optimization helps in loops where there is a place to run consecutive iterations concurrently but the traditional instruction scheduling is not able to fully utilize the hardware functional units. This optimization is disabled by default because of compile time consumption; -fmodulo-sched activates it.

Source: https://gcc.gnu.org/news/sms.html
2022-05-18 14:34:11 +02:00
4c0a56b5f9 enable register pressure sensitive insn scheduling before register allocation
docs:
Enable register pressure sensitive insn scheduling before register allocation. This only makes sense when scheduling before register allocation is enabled, i.e. with -fschedule-insns or at -O2 or higher. Usage of this option can improve the generated code and decrease its size by preventing register pressure increase above the number of available hard registers and subsequent spills in register allocation.
2022-05-18 14:34:11 +02:00
bab81f1941 enable evaluation of register pressure in loops for decisions to move loop invariants
This is enabled with -O3, but only for some targets; it seems to be off for x86_64.

docs:
Use IRA to evaluate register pressure in loops for decisions to move loop invariants. This option usually results in generation of faster and smaller code on machines with large register files (>= 32 registers), but it can slow the compiler down.
2022-05-18 14:34:11 +02:00
44ac1fd5c8 enable live range shrinkage to relieve register pressure
docs:
I'd recommend to use this at
least for x86/x86-64.  I think any OOO processor with small or
moderate register file which does not use the 1st insn scheduling
might benefit from this too.

  On SPEC2000 for x86/x86-64 (I use Haswell processor, -O3 with
general tuning), the optimization usage results in smaller code size
in average (for floating point and integer benchmarks in 32- and
64-bit mode).  The improvement better visible for SPECFP2000 (although
I have the same improvement on x86-64 SPECInt2000 but it might be
attributed mostly mcf benchmark unstability).  It is about 0.5% for
32-bit and 64-bit mode.  It is understandable, as the optimization has
more opportunities to improve the code on longer BBs.  Different from
other heuristic optimizations, I don't see any significant worse
performance.  It gives practically the same or better performance (a
few benchmarks imporoved by 1% or more upto 3%).

  The single but significant drawback is additional compilation time
(4%-6%) as the 1st insn scheduling pass is quite expensive.

Source of docs: https://gcc.gnu.org/legacy-ml/gcc-patches/2013-11/msg00420.html
2022-05-18 14:34:11 +02:00
ed3a8ad5d8 enable elimination of redundant loads that come after stores to the same memory location
docs:
When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial and full redundancies).
2022-05-18 14:34:11 +02:00
fa46ae8c7c enable store motion pass run after global common subexpression elimination
docs:
When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass attempts to move stores out of loops. When used in conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop.
2022-05-18 14:34:11 +02:00
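Taken together, commits like these surface as extra entries on the CFLAGS line the buildbot injects. A purely illustrative fragment (not ALHP's actual flag set, which lives in its build configuration):

```bash
# Illustrative only - not the flag set ALHP actually ships.
export CFLAGS="-march=x86-64-v3 -O3 -ffinite-loops -ftree-vectorize \
  -fira-loop-pressure -fsched-pressure -flive-range-shrinkage -fgcse-sm -fgcse-las"
```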
38 changed files with 5310 additions and 4764 deletions


@@ -1,94 +0,0 @@
linters-settings:
dupl:
threshold: 100
goconst:
min-len: 3
min-occurrences: 4
gocritic:
enabled-tags:
- diagnostic
- experimental
- opinionated
- performance
- style
disabled-checks:
- whyNoLint
- filepathJoin
mnd:
checks:
- argument
- case
- condition
- return
ignored-numbers:
- '0'
- '1'
- '2'
- '3'
- '4'
- '5'
- '6'
- '7'
- '8'
- '9'
- '10'
- '100'
- '1000'
ignored-functions:
- strings.SplitN
- os.OpenFile
- os.MkdirAll
- os.WriteFile
govet:
check-shadowing: false
lll:
line-length: 140
misspell:
locale: US
nolintlint:
allow-unused: false # report any unused nolint directives
require-explanation: false # don't require an explanation for nolint directives
require-specific: false # don't require nolint directives to be specific about which linter is being skipped
tagliatelle:
case:
use-field-name: true
rules:
# Any struct tag type can be used.
# Support string case: `camel`, `pascal`, `kebab`, `snake`, `upperSnake`, `goCamel`, `goPascal`, `goKebab`, `goSnake`, `upper`, `lower`, `header`.
json: snake
yaml: snake
xml: camel
linters:
enable-all: true
disable:
- gochecknoglobals
- depguard
- gci
- gofumpt
- goimports
- varnamelen
- funlen
- cyclop
- wsl
- nosnakecase
- nlreturn
- godot
- nestif
- wrapcheck
- gocognit
- gocyclo
- maintidx
- nonamedreturns
- exhaustivestruct
- exhaustruct
- forcetypeassert
- godox
- nakedret
- tagalign
- maligned
# remove for new projects
- errname
- goerr113
- depguard
- noctx

README.md

@@ -1,41 +1,27 @@
# ALHP
[![](https://img.shields.io/badge/package-status-informational?style=flat-square)](https://status.alhp.dev)
[![](https://goreportcard.com/badge/somegit.dev/ALHP/ALHP.GO?style=flat-square)](https://goreportcard.com/report/somegit.dev/ALHP/ALHP.GO)
[![](https://pkg.go.dev/badge/somegit.dev/ALHP/ALHP.GO)](https://pkg.go.dev/somegit.dev/ALHP/ALHP.GO)
[![](https://img.shields.io/badge/license-GPL-blue?style=flat-square)](https://somegit.dev/anonfunc/ALHP.GO/src/branch/master/LICENSE)
[![](https://img.shields.io/badge/license-GPL-blue?style=flat-square)](https://git.harting.dev/anonfunc/ALHP.GO/src/branch/master/LICENSE)
[![](https://img.shields.io/badge/package-status-informational?style=flat-square)](https://alhp.anonfunc.dev/packages.html)
[![](https://goreportcard.com/badge/git.harting.dev/ALHP/ALHP.GO?style=flat-square)](https://goreportcard.com/report/git.harting.dev/ALHP/ALHP.GO)
[![](https://pkg.go.dev/badge/git.harting.dev/ALHP/ALHP.GO)](https://pkg.go.dev/git.harting.dev/ALHP/ALHP.GO)
[![](https://img.shields.io/liberapay/patrons/anonfunc.svg?logo=liberapay&style=flat-square)](https://liberapay.com/anonfunc/)
Buildbot for Archlinux based repos with different
Buildbot for Archlinux-based repos build with different
[x86-64 feature levels](https://www.phoronix.com/scan.php?page=news_item&px=GCC-11-x86-64-Feature-Levels), `-O3` and
[LTO](https://en.wikipedia.org/wiki/Interprocedural_optimization).
> [!WARNING]
> NVIDIA graphics users using the **proprietary driver** are strongly encouraged to read the
> [FAQ about Linux kernel modules](#directly-linked-kernel-modules) before enabling any repos.
> ⚠️ NVIDIA graphic users using the **proprietary driver** is highly recommended reading the
> [FAQ about Linux kernel modules](#directly-linked-kernel-modules) ⚠️
---
<!-- TOC -->
* [Quick Start](#quick-start)
* [FAQ](#faq)
* [Matrix](#matrix)
* [Donations](#donations)
* [License and Legal](#license-and-legal)
<!-- TOC -->
---
## Quick Start
## Quickstart
### 1. Check your system for support
> [!CAUTION]
> Before enabling any of these repos, make sure that your system supports the level of functionality you want to
> enable (e.g. `x86-64-v3`).
> **If you don't check first, you may not be able to boot your system and will have to downgrade any packages you may
have upgraded.**
> **Important**: Before you enable any of these repos, check if your system supports the feature level you want to enable
(e.g. `x86-64-v3`).
> **If you don't check beforehand, you might be unable to boot your system anymore and need to downgrade any package that you may have upgraded.**
Check which feature levels your CPU supports with
Check which feature-levels your CPU supports with
```bash
/lib/ld-linux-x86-64.so.2 --help
@@ -50,14 +36,10 @@ Subdirectories of glibc-hwcaps directories, in priority order:
x86-64-v2 (supported, searched)
```
> [!NOTE]
> ALHP repos for `x86-64-v2`, `x86-64-v3` and `x86-64-v4` are currently available. You can see all available
> repositories [here](https://alhp.dev/).
### 2. Install keyring & mirrorlist
Install [alhp-keyring](https://aur.archlinux.org/packages/alhp-keyring/)
and [alhp-mirrorlist](https://aur.archlinux.org/packages/alhp-mirrorlist/) from the **AUR**.
and [alhp-mirrorlist](https://aur.archlinux.org/packages/alhp-mirrorlist/) from **AUR**.
Example with `yay`:
@@ -69,16 +51,15 @@ yay -S alhp-keyring alhp-mirrorlist
### 3. Choose a mirror (optional)
Edit `/etc/pacman.d/alhp-mirrorlist` and comment in/out the mirrors you want to enable/disable.
By default, a CDN mirror provided by ALHP is selected.
> [!NOTE]
> `cdn.alhp.dev` and `alhp.dev` are provided directly by ALHP. If you have problems with a mirror,
> open an issue at [the mirrorlist repo](https://somegit.dev/ALHP/alhp-mirrorlist).
Edit `/etc/pacman.d/alhp-mirrorlist` and comment out/in mirrors you want to have enabled/disabled. Per default selected
is a cloudflare-based mirror which
[*should* provide decent speed worldwide](https://git.harting.dev/ALHP/ALHP.GO/issues/38#issuecomment-891).
> Note: Only `alhp.harting.dev` is hosted by ALHP directly. If you have problems with a mirror,
> open an issue at [the mirrorlist repo](https://git.harting.dev/ALHP/alhp-mirrorlist).
### 4. Modify pacman.conf
### 4. Modify /etc/pacman.conf
Add the ALHP repos to your `/etc/pacman.conf`. Make sure the appropriate ALHP repository is **above** the Archlinux
repo.
Add the appropriate repos **above** your regular Archlinux repos.
Example for `x86-64-v3`:
@@ -86,112 +67,81 @@ Example for `x86-64-v3`:
[core-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist
[core]
Include = /etc/pacman.d/mirrorlist
[extra-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist
[community-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist
[core]
Include = /etc/pacman.d/mirrorlist
[extra]
Include = /etc/pacman.d/mirrorlist
# if you need [multilib] support
[multilib-x86-64-v3]
Include = /etc/pacman.d/alhp-mirrorlist
[multilib]
[community]
Include = /etc/pacman.d/mirrorlist
```
Replace `x86-64-v3` with the x86-64 feature level you want to enable.
> ALHP only builds for `x86-64-v3` and `x86-64-v2` at the moment (list is subject to change). You can see all available repositories
> [here](https://alhp.harting.dev/).
> [!TIP]
> Multiple layers can be stacked as described in https://somegit.dev/ALHP/ALHP.GO/issues/255#issuecomment-3335.
### 5. Update package database and upgrade
### 5. Update package database and upgrade:
```
pacman -Suy
```
## FAQ
## How to disable
### Remove ALHP packages
To disable ALHP, remove all *x86-64-vX* entries in `/etc/pacman.conf` and remove `alhp-keyring` and `alhp-mirrorlist`.
After that, you can update pacman's databases and downgrade all packages, like
To disable ALHP remove all *x86-64-vX* entries in `/etc/pacman.conf` and remove `alhp-keyring` and `alhp-mirrorlist`.
After that you can refresh pacmans databases and downgrade all packages like:
```
pacman -Suuy
```
## FAQ
### LTO
Enabled for all packages built after 04 Nov 2021 12:07:00
UTC. [More details.](https://somegit.dev/ALHP/ALHP.GO/issues/52)
Enabled for all packages build after 04 Nov 2021 12:07:00
UTC. [More details.](https://git.harting.dev/anonfunc/ALHP.GO/issues/52)
LTO status is visible per package on the package status page.
### Linux Kernel packages
### Linux Kernel
`KCFLAGS`/`KCPPFLAGS` are used to build the kernel packages with our additional flags.
ALHP provides patched kernels (except `linux-zen`) that build with `-march=x86-64-vN`. Thanks to
[graysky](https://github.com/graysky2) for providing [these patches](https://github.com/graysky2/kernel_compiler_patch).
### Directly linked kernel modules
Due to our increase in pkgrel, building the kernel packages **will break any directly linked modules** such as `nvidia`
(not `nvidia-dkms`) or `virtualbox-host-modules-arch` (not `virtualbox-host-dkms`). **Their respective `dkms`-variant is
not affected**. This issue is being tracked in #68, a solution is being worked on.
**Above-mentioned patching breaks all directly linked modules** like `nvidia` (not `nvidia-dkms`) or
`virtualbox-host-modules-arch` (not `virtualbox-host-dkms`). **Their respective `dkms`-variant is not affected**. This
issue is being tracked in #68, a solution is being worked on.
### Mirrors
You want to mirror ALHP? You are welcome to do
so, [see alhp-mirrorlist for how to become one](https://somegit.dev/ALHP/alhp-mirrorlist#how-to-become-a-mirror).
so, [see alhp-mirrorlist for how to become one](https://git.harting.dev/ALHP/alhp-mirrorlist#how-to-become-a-mirror).
### What packages are built
Packages [excluded](https://www.reddit.com/r/archlinux/comments/oflged/alhp_archlinux_recompiled_for_x8664v3_experimental/h4fkinu?utm_source=share&utm_medium=web2x&context=3)
from building (besides all `any` architecture packages) are being listed in issue #16.
See also [package status page](https://status.alhp.dev) (search for `blacklisted`).
### Why is package X not up-to-date
Also relevant for: **I can't find package X / Application X fails to start because it links to an old/newer lib**
ALHP builds packages **after** they are released in the official Archlinux repos (excluding `[*-testing]`).
This will cause packages to be delayed if the current batch contains many packages, or packages that take a while to
build (e.g. `chromium`).
You can always check on the progress of the current build cycle on the [package status page](https://status.alhp.dev).
Please refrain from opening issues caused by packages currently in queue/not yet build/not yet moved to the repo.
Please keep in mind that large rebuilds such as `openssl` or `python` can take days to complete on our current build
hardware.
from building (besides all 'any' architecture packages) are being listed in issue #16.
Also [package status page](https://alhp.anonfunc.dev/packages.html).
### Debug symbols
ALHP provides a debuginfod instance under `debuginfod.alhp.dev`.
ALHP provides a debuginfod instance under `debuginfod.harting.dev`.
To use it, have `debuginfod` installed on your system and add it to your `DEBUGINFOD_URLS` with:
```bash
echo "https://debuginfod.alhp.dev" > /etc/debuginfod/alhp.urls
echo "https://debuginfod.harting.dev" > /etc/debuginfod/alhp.urls
```
### Switch between levels
If you want to switch between levels, e.g. from `x86-64-v3` to `x86-64-v4`, you need to revert to official packages
first, and then enable your desired repos again.
1. Comment out or remove the ALHP repo entries in `/etc/pacman.conf`.
2. Downgrade packages with `pacman -Suuy`.
3. Clear pacman's package cache with `pacman -Scc`.
4. Uncomment/add your desired repos to `/etc/pacman.conf` and update with `pacman -Suy`.
## Matrix
For any non-issue questions, or if you just want to chat, ALHP has a Matrix
room [here](https://matrix.to/#/#alhp:ofsg.eu) (`#alhp@ofsg.eu`). You can also find me (@idlegandalf)
in `#archlinux:archlinux.org`.
## Donations
I appreciate any money you want to throw my way, but donations are strictly optional. Donations are primarily used to
@@ -199,8 +149,3 @@ pay for server costs. Also consider [donating to the **Archlinux Team**](https:/
work ALHP would not be possible.
[![Donate using Liberapay](https://liberapay.com/assets/widgets/donate.svg)](https://liberapay.com/anonfunc/)
## License and Legal
This project and all of its source code is released under the terms of the GNU General Public License, version 2
or any later version. See [LICENSE](https://somegit.dev/ALHP/ALHP.GO/src/branch/master/LICENSE) for details.


@@ -1,5 +1,5 @@
[Unit]
Description=Go based Archlinux instruction-set enabled repo build manager.
Description=Go based Archlinux instructionset enabled repo build manager.
After=network.target
[Service]


@@ -1,480 +0,0 @@
package main
import (
"context"
"errors"
"fmt"
"github.com/c2h5oh/datasize"
"github.com/prometheus/client_golang/prometheus"
"github.com/sethvargo/go-retry"
log "github.com/sirupsen/logrus"
"os"
"os/exec"
"path/filepath"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"strings"
"sync"
"time"
)
const MaxUnknownBuilder = 2
type BuildManager struct {
repoPurge map[string]chan []*ProtoPackage
repoAdd map[string]chan []*ProtoPackage
repoWG *sync.WaitGroup
alpmMutex *sync.RWMutex
building []*ProtoPackage
buildingLock *sync.RWMutex
queueSignal chan struct{}
metrics struct {
queueSize *prometheus.GaugeVec
}
}
func (b *BuildManager) buildQueue(ctx context.Context, queue []*ProtoPackage) error {
var (
doneQ []*ProtoPackage
doneQLock = new(sync.RWMutex)
unknownBuilds bool
queueNoMatch bool
)
for len(doneQ) != len(queue) {
up := 0
b.buildingLock.RLock()
if (pkgList2MaxMem(b.building) < conf.Build.MemoryLimit &&
!unknownBuilds && !queueNoMatch) ||
(unknownBuilds && len(b.building) < MaxUnknownBuilder) {
queueNoMatch = true
b.buildingLock.RUnlock()
for _, pkg := range queue {
// check if package is already build
doneQLock.RLock()
if ContainsPkg(doneQ, pkg, true) {
doneQLock.RUnlock()
continue
}
doneQLock.RUnlock()
// check if package is already building (we do not build packages from different marchs simultaneously)
b.buildingLock.RLock()
if ContainsPkg(b.building, pkg, false) {
log.Debugf("[Q] skipped already building package %s->%s", pkg.FullRepo, pkg.Pkgbase)
b.buildingLock.RUnlock()
continue
}
b.buildingLock.RUnlock()
// only check for memory on known-memory-builds
// otherwise build them one-at-a-time
// TODO: add initial compile mode for new repos
if !unknownBuilds {
// check if package has unknown memory usage
if pkg.DBPackage.MaxRss == nil {
log.Debugf("[Q] skipped unknown package %s->%s", pkg.FullRepo, pkg.Pkgbase)
up++
continue
}
// check if package can be built with current memory limit
if datasize.ByteSize(*pkg.DBPackage.MaxRss)*datasize.KB > conf.Build.MemoryLimit { //nolint:gosec
log.Warningf("[Q] %s->%s exeeds memory limit: %s->%s", pkg.FullRepo, pkg.Pkgbase,
datasize.ByteSize(*pkg.DBPackage.MaxRss)*datasize.KB, conf.Build.MemoryLimit) //nolint:gosec
doneQLock.Lock()
doneQ = append(doneQ, pkg)
doneQLock.Unlock()
b.metrics.queueSize.WithLabelValues(pkg.FullRepo, "queued").Dec()
continue
}
b.buildingLock.RLock()
currentMemLoad := pkgList2MaxMem(b.building)
b.buildingLock.RUnlock()
// check if package can be build right now
if currentMemLoad+(datasize.ByteSize(*pkg.DBPackage.MaxRss)*datasize.KB) > conf.Build.MemoryLimit { //nolint:gosec
log.Debugf("[Q] skipped package with max_rss %s while load %s: %s->%s",
datasize.ByteSize(*pkg.DBPackage.MaxRss)*datasize.KB, currentMemLoad, pkg.Pkgbase, pkg.March) //nolint:gosec
continue
}
} else {
b.buildingLock.RLock()
if len(b.building) >= MaxUnknownBuilder {
b.buildingLock.RUnlock()
continue
}
b.buildingLock.RUnlock()
}
b.buildingLock.Lock()
b.building = append(b.building, pkg)
b.buildingLock.Unlock()
queueNoMatch = false
b.metrics.queueSize.WithLabelValues(pkg.FullRepo, "queued").Dec()
b.metrics.queueSize.WithLabelValues(pkg.FullRepo, "building").Inc()
go func(pkg *ProtoPackage) {
dur, err := pkg.build(ctx)
b.metrics.queueSize.WithLabelValues(pkg.FullRepo, "building").Dec()
if err != nil && !errors.Is(err, ErrorNotEligible) {
log.Warningf("[Q] error building package %s->%s in %s: %s", pkg.FullRepo, pkg.Pkgbase, dur, err)
b.repoPurge[pkg.FullRepo] <- []*ProtoPackage{pkg}
} else if err == nil {
log.Infof("[Q] build successful: %s->%s (%s)", pkg.FullRepo, pkg.Pkgbase, dur)
b.metrics.queueSize.WithLabelValues(pkg.FullRepo, "built").Inc()
}
doneQLock.Lock()
b.buildingLock.Lock()
doneQ = append(doneQ, pkg)
for i := 0; i < len(b.building); i++ {
if b.building[i].PkgbaseEquals(pkg, true) {
b.building = append(b.building[:i], b.building[i+1:]...)
break
}
}
doneQLock.Unlock()
b.buildingLock.Unlock()
b.queueSignal <- struct{}{}
}(pkg)
}
} else {
log.Debugf("[Q] memory/build limit reached, waiting for package to finish...")
b.buildingLock.RUnlock()
<-b.queueSignal
queueNoMatch = false
}
// if only unknown packages are left, enable unknown buildmode
b.buildingLock.RLock()
if up == len(queue)-(len(doneQ)+len(b.building)) {
unknownBuilds = true
}
b.buildingLock.RUnlock()
}
return nil
}
func (b *BuildManager) repoWorker(ctx context.Context, repo string) {
for {
select {
case pkgL := <-b.repoAdd[repo]:
b.repoWG.Add(1)
toAdd := make([]string, 0)
for _, pkg := range pkgL {
toAdd = append(toAdd, pkg.PkgFiles...)
}
args := []string{"-s", "-v", "-p", "-n", filepath.Join(conf.Basedir.Repo, repo, "os", conf.Arch, repo) + ".db.tar.xz"}
args = append(args, toAdd...)
cmd := exec.CommandContext(ctx, "repo-add", args...)
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil && cmd.ProcessState.ExitCode() != 1 {
log.Panicf("%s while repo-add: %v", string(res), err)
}
for _, pkg := range pkgL {
err = pkg.toDBPackage(ctx, true)
if err != nil {
log.Warningf("error getting db entry for %s: %v", pkg.Pkgbase, err)
continue
}
pkgUpd := pkg.DBPackage.Update().
SetStatus(dbpackage.StatusLatest).
ClearSkipReason().
SetRepoVersion(pkg.Version).
SetTagRev(pkg.State.TagRev)
if _, err := os.Stat(filepath.Join(conf.Basedir.Debug, pkg.March,
pkg.DBPackage.Packages[0]+"-debug-"+pkg.Version+"-"+conf.Arch+".pkg.tar.zst")); err == nil {
pkgUpd = pkgUpd.SetDebugSymbols(dbpackage.DebugSymbolsAvailable)
} else {
pkgUpd = pkgUpd.SetDebugSymbols(dbpackage.DebugSymbolsNotAvailable)
}
if pkg.DBPackage, err = pkgUpd.Save(ctx); err != nil {
log.Error(err)
}
}
cmd = exec.CommandContext(ctx, "paccache", "-rc", filepath.Join(conf.Basedir.Repo, repo, "os", conf.Arch), "-k", "1") //nolint:gosec
res, err = cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Warningf("error running paccache: %v", err)
}
err = updateLastUpdated()
if err != nil {
log.Warningf("error updating lastupdate: %v", err)
}
b.repoWG.Done()
case pkgL := <-b.repoPurge[repo]:
for _, pkg := range pkgL {
if _, err := os.Stat(filepath.Join(conf.Basedir.Repo, pkg.FullRepo, "os", conf.Arch, pkg.FullRepo) + ".db.tar.xz"); err != nil {
continue
}
if len(pkg.PkgFiles) == 0 {
if err := pkg.findPkgFiles(); err != nil {
log.Warningf("[%s/%s] unable to find files: %v", pkg.FullRepo, pkg.Pkgbase, err)
continue
} else if len(pkg.PkgFiles) == 0 {
if pkg.DBPackage != nil {
err = pkg.DBPackage.Update().ClearRepoVersion().ClearTagRev().Exec(ctx)
if err != nil {
log.Error(err)
}
}
continue
}
}
var realPkgs []string
for _, filePath := range pkg.PkgFiles {
if _, err := os.Stat(filePath); err == nil {
realPkgs = append(realPkgs, Package(filePath).Name())
}
}
if len(realPkgs) == 0 {
continue
}
b.repoWG.Add(1)
args := []string{"-s", "-v", filepath.Join(conf.Basedir.Repo, pkg.FullRepo, "os", conf.Arch, pkg.FullRepo) + ".db.tar.xz"}
args = append(args, realPkgs...)
cmd := exec.CommandContext(ctx, "repo-remove", args...)
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil && cmd.ProcessState.ExitCode() == 1 {
log.Warningf("error while deleting package %s: %s", pkg.Pkgbase, string(res))
}
if pkg.DBPackage != nil {
err = pkg.DBPackage.Update().ClearRepoVersion().ClearTagRev().Exec(ctx)
if err != nil {
log.Error(err)
}
}
for _, file := range pkg.PkgFiles {
_ = os.Remove(file)
_ = os.Remove(file + ".sig")
}
err = updateLastUpdated()
if err != nil {
log.Warningf("error updating lastupdate: %v", err)
}
b.repoWG.Done()
}
}
}
}
func (b *BuildManager) syncWorker(ctx context.Context) error {
err := os.MkdirAll(conf.Basedir.Work, 0o755)
if err != nil {
log.Fatalf("error creating work dir %s: %v", conf.Basedir.Work, err)
}
gitPath := filepath.Join(conf.Basedir.Work, stateDir)
for {
if _, err := os.Stat(gitPath); os.IsNotExist(err) {
cmd := exec.CommandContext(ctx, "git", "clone", "--depth=1", conf.StateRepo, gitPath) //nolint:gosec
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Fatalf("error cloning state repo: %v", err)
}
} else if err == nil {
cmd := exec.CommandContext(ctx, "git", "reset", "--hard")
cmd.Dir = gitPath
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Fatalf("error reseting state repo: %v", err)
}
cmd = exec.CommandContext(ctx, "git", "pull")
cmd.Dir = gitPath
res, err = cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Warningf("failed to update state repo: %v", err)
}
}
// housekeeping
wg := new(sync.WaitGroup)
for _, repo := range repos {
wg.Add(1)
splitRepo := strings.Split(repo, "-")
go func() {
err := housekeeping(ctx, splitRepo[0], strings.Join(splitRepo[1:], "-"), wg)
if err != nil {
log.Warningf("[%s] housekeeping failed: %v", repo, err)
}
}()
}
wg.Wait()
err := logHK(ctx)
if err != nil {
log.Warningf("log-housekeeping failed: %v", err)
}
debugHK()
// fetch updates between sync runs
b.alpmMutex.Lock()
err = alpmHandle.Release()
if err != nil {
log.Fatalf("error releasing ALPM handle: %v", err)
}
if err := retry.Fibonacci(ctx, 1*time.Second, func(_ context.Context) error {
if err := setupChroot(ctx); err != nil {
log.Warningf("unable to upgrade chroot, trying again later")
return retry.RetryableError(err)
}
return nil
}); err != nil {
log.Fatal(err)
}
alpmHandle, err = initALPM(filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot),
filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot, "/var/lib/pacman"))
if err != nil {
log.Warningf("error while alpm-init: %v", err)
}
b.alpmMutex.Unlock()
queue, err := b.genQueue(ctx)
if err != nil {
log.Errorf("error building queue: %v", err)
return err
}
log.Debugf("build-queue with %d items", len(queue))
err = b.buildQueue(ctx, queue)
if err != nil {
return err
}
if ctx.Err() == nil {
for _, repo := range repos {
err = movePackagesLive(ctx, repo)
if err != nil {
log.Errorf("[%s] error moving packages live: %v", repo, err)
}
}
} else {
return ctx.Err()
}
b.metrics.queueSize.Reset()
log.Debugf("build-cycle finished")
time.Sleep(time.Duration(*checkInterval) * time.Minute)
}
}
func (b *BuildManager) genQueue(ctx context.Context) ([]*ProtoPackage, error) {
stateFiles, err := Glob(filepath.Join(conf.Basedir.Work, stateDir, "**/*"))
if err != nil {
return nil, fmt.Errorf("error scanning for state-files: %w", err)
}
var pkgbuilds []*ProtoPackage
for _, stateFile := range stateFiles {
stat, err := os.Stat(stateFile)
if err != nil || stat.IsDir() || strings.Contains(stateFile, ".git") || strings.Contains(stateFile, "README.md") {
continue
}
repo, subRepo, arch, err := stateFileMeta(stateFile)
if err != nil {
log.Warningf("[QG] error generating statefile metadata %s: %v", stateFile, err)
continue
}
if !Contains(conf.Repos, repo) || (subRepo != nil && Contains(conf.Blacklist.Repo, *subRepo)) {
continue
}
rawState, err := os.ReadFile(stateFile)
if err != nil {
log.Warningf("[QG] cannot read statefile %s: %v", stateFile, err)
continue
}
state, err := parseState(string(rawState))
if err != nil {
log.Warningf("[QG] cannot parse statefile %s: %v", stateFile, err)
continue
}
for _, march := range conf.March {
pkg := &ProtoPackage{
Pkgbase: state.Pkgbase,
Repo: dbpackage.Repository(repo),
March: march,
FullRepo: repo + "-" + march,
State: state,
Version: state.PkgVer,
Arch: arch,
}
err = pkg.toDBPackage(ctx, false)
if err != nil {
log.Warningf("[QG] error getting/creating dbpackage %s: %v", state.Pkgbase, err)
continue
}
if !pkg.isAvailable(ctx, alpmHandle) {
log.Debugf("[QG] %s->%s not available on mirror, skipping build", pkg.FullRepo, pkg.Pkgbase)
continue
}
aBuild, err := pkg.IsBuilt()
if err != nil {
log.Warningf("[QG] %s->%s error determining built packages: %v", pkg.FullRepo, pkg.Pkgbase, err)
}
if aBuild {
log.Infof("[QG] %s->%s already built, skipping build", pkg.FullRepo, pkg.Pkgbase)
continue
}
if pkg.DBPackage == nil {
err = pkg.toDBPackage(ctx, true)
if err != nil {
log.Warningf("[QG] error getting/creating dbpackage %s: %v", state.Pkgbase, err)
continue
}
}
if pkg.DBPackage.TagRev != nil && *pkg.DBPackage.TagRev == state.TagRev {
continue
}
// try download .SRCINFO from repo
srcInfo, err := downloadSRCINFO(pkg.DBPackage.Pkgbase, state.TagRev)
if err == nil {
pkg.Srcinfo = srcInfo
}
if !pkg.isEligible(ctx) {
continue
}
pkg.DBPackage, err = pkg.DBPackage.Update().SetStatus(dbpackage.StatusQueued).Save(ctx)
if err != nil {
log.Warningf("[QG] error updating dbpackage %s: %v", state.Pkgbase, err)
}
pkgbuilds = append(pkgbuilds, pkg)
b.metrics.queueSize.WithLabelValues(pkg.FullRepo, "queued").Inc()
}
}
return pkgbuilds, nil
}


@@ -2,10 +2,24 @@ arch: x86_64
repos:
- core
- extra
- community
state_repo: "https://gitlab.archlinux.org/archlinux/packaging/state.git"
svn2git:
upstream-core-extra: "https://github.com/archlinux/svntogit-packages.git"
upstream-community: "https://github.com/archlinux/svntogit-community.git"
max_clone_retries: 100
kernel_to_patch:
- linux
- linux-lts
- linux-zen
- linux-hardened
kernel_patches:
4.19: "https://raw.githubusercontent.com/graysky2/kernel_compiler_patch/master/more-uarches-for-kernel-4.19-5.4.patch"
5.5: "none"
5.8: "https://raw.githubusercontent.com/graysky2/kernel_compiler_patch/master/more-uarches-for-kernel-5.8-5.14.patch"
5.15: "https://raw.githubusercontent.com/graysky2/kernel_compiler_patch/master/more-uarches-for-kernel-5.15%2B.patch"
linux-zen: "skip"
db:
driver: pgx
@@ -33,18 +47,23 @@ blacklist:
- llvm
- rust
status:
class:
skipped: "secondary"
queued: "warning"
latest: "primary"
failed: "danger"
signing: "success"
building: "info"
unknown: "dark"
build:
# number of workers total
worker: 4
makej: 8
checks: true
# how much memory ALHP should use
# this will also decide how many builds will run concurrently,
# since ALHP will try to optimise the queue for speed while not going over this limit
memory_limit: "16gb"
# builds over this threshold are considered slow (in cpu-time-seconds)
slow_queue_threshold: 14400.0
logging:
level: INFO
metrics:
port: 9568


@@ -1,20 +1,18 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"log"
"reflect"
"somegit.dev/ALHP/ALHP.GO/ent/migrate"
"git.harting.dev/ALHP/ALHP.GO/ent/migrate"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
"entgo.io/ent"
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)
// Client is the client that holds all ent builders.
@@ -22,76 +20,22 @@ type Client struct {
config
// Schema is the client for creating, migrating and dropping schema.
Schema *migrate.Schema
// DBPackage is the client for interacting with the DBPackage builders.
DBPackage *DBPackageClient
// DbPackage is the client for interacting with the DbPackage builders.
DbPackage *DbPackageClient
}
// NewClient creates a new client configured with the given options.
func NewClient(opts ...Option) *Client {
client := &Client{config: newConfig(opts...)}
cfg := config{log: log.Println, hooks: &hooks{}}
cfg.options(opts...)
client := &Client{config: cfg}
client.init()
return client
}
func (c *Client) init() {
c.Schema = migrate.NewSchema(c.driver)
c.DBPackage = NewDBPackageClient(c.config)
}
type (
// config is the configuration for the client and its builder.
config struct {
// driver used for executing database requests.
driver dialect.Driver
// debug enable a debug logging.
debug bool
// log used for logging on debug mode.
log func(...any)
// hooks to execute on mutations.
hooks *hooks
// interceptors to execute on queries.
inters *inters
}
// Option function to configure the client.
Option func(*config)
)
// newConfig creates a new config for the client.
func newConfig(opts ...Option) config {
cfg := config{log: log.Println, hooks: &hooks{}, inters: &inters{}}
cfg.options(opts...)
return cfg
}
// options applies the options on the config object.
func (c *config) options(opts ...Option) {
for _, opt := range opts {
opt(c)
}
if c.debug {
c.driver = dialect.Debug(c.driver, c.log)
}
}
// Debug enables debug logging on the ent.Driver.
func Debug() Option {
return func(c *config) {
c.debug = true
}
}
// Log sets the logging function for debug mode.
func Log(fn func(...any)) Option {
return func(c *config) {
c.log = fn
}
}
// Driver configures the client driver.
func Driver(driver dialect.Driver) Option {
return func(c *config) {
c.driver = driver
}
c.DbPackage = NewDbPackageClient(c.config)
}
// Open opens a database/sql.DB specified by the driver name and
@@ -110,14 +54,11 @@ func Open(driverName, dataSourceName string, options ...Option) (*Client, error)
}
}
// ErrTxStarted is returned when trying to start a new transaction from a transactional client.
var ErrTxStarted = errors.New("ent: cannot start a transaction within a transaction")
// Tx returns a new transactional client. The provided context
// is used until the transaction is committed or rolled back.
func (c *Client) Tx(ctx context.Context) (*Tx, error) {
if _, ok := c.driver.(*txDriver); ok {
return nil, ErrTxStarted
return nil, fmt.Errorf("ent: cannot start a transaction within a transaction")
}
tx, err := newTx(ctx, c.driver)
if err != nil {
@@ -128,14 +69,14 @@ func (c *Client) Tx(ctx context.Context) (*Tx, error) {
return &Tx{
ctx: ctx,
config: cfg,
DBPackage: NewDBPackageClient(cfg),
DbPackage: NewDbPackageClient(cfg),
}, nil
}
// BeginTx returns a transactional client with specified options.
func (c *Client) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error) {
if _, ok := c.driver.(*txDriver); ok {
return nil, errors.New("ent: cannot start a transaction within a transaction")
return nil, fmt.Errorf("ent: cannot start a transaction within a transaction")
}
tx, err := c.driver.(interface {
BeginTx(context.Context, *sql.TxOptions) (dialect.Tx, error)
@@ -148,16 +89,17 @@ func (c *Client) BeginTx(ctx context.Context, opts *sql.TxOptions) (*Tx, error)
return &Tx{
ctx: ctx,
config: cfg,
DBPackage: NewDBPackageClient(cfg),
DbPackage: NewDbPackageClient(cfg),
}, nil
}
// Debug returns a new debug-client. It's used to get verbose logging on specific operations.
//
// client.Debug().
// DBPackage.
// DbPackage.
// Query().
// Count(ctx)
//
func (c *Client) Debug() *Client {
if c.debug {
return c
@@ -177,126 +119,87 @@ func (c *Client) Close() error {
// Use adds the mutation hooks to all the entity clients.
// In order to add hooks to a specific client, call: `client.Node.Use(...)`.
func (c *Client) Use(hooks ...Hook) {
c.DBPackage.Use(hooks...)
c.DbPackage.Use(hooks...)
}
// Intercept adds the query interceptors to all the entity clients.
// In order to add interceptors to a specific client, call: `client.Node.Intercept(...)`.
func (c *Client) Intercept(interceptors ...Interceptor) {
c.DBPackage.Intercept(interceptors...)
}
// Mutate implements the ent.Mutator interface.
func (c *Client) Mutate(ctx context.Context, m Mutation) (Value, error) {
switch m := m.(type) {
case *DBPackageMutation:
return c.DBPackage.mutate(ctx, m)
default:
return nil, fmt.Errorf("ent: unknown mutation type %T", m)
}
}
// DBPackageClient is a client for the DBPackage schema.
type DBPackageClient struct {
// DbPackageClient is a client for the DbPackage schema.
type DbPackageClient struct {
config
}
// NewDBPackageClient returns a client for the DBPackage from the given config.
func NewDBPackageClient(c config) *DBPackageClient {
return &DBPackageClient{config: c}
// NewDbPackageClient returns a client for the DbPackage from the given config.
func NewDbPackageClient(c config) *DbPackageClient {
return &DbPackageClient{config: c}
}
// Use adds a list of mutation hooks to the hooks stack.
// A call to `Use(f, g, h)` equals to `dbpackage.Hooks(f(g(h())))`.
func (c *DBPackageClient) Use(hooks ...Hook) {
c.hooks.DBPackage = append(c.hooks.DBPackage, hooks...)
func (c *DbPackageClient) Use(hooks ...Hook) {
c.hooks.DbPackage = append(c.hooks.DbPackage, hooks...)
}
// Intercept adds a list of query interceptors to the interceptors stack.
// A call to `Intercept(f, g, h)` equals to `dbpackage.Intercept(f(g(h())))`.
func (c *DBPackageClient) Intercept(interceptors ...Interceptor) {
c.inters.DBPackage = append(c.inters.DBPackage, interceptors...)
// Create returns a create builder for DbPackage.
func (c *DbPackageClient) Create() *DbPackageCreate {
mutation := newDbPackageMutation(c.config, OpCreate)
return &DbPackageCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// Create returns a builder for creating a DBPackage entity.
func (c *DBPackageClient) Create() *DBPackageCreate {
mutation := newDBPackageMutation(c.config, OpCreate)
return &DBPackageCreate{config: c.config, hooks: c.Hooks(), mutation: mutation}
// CreateBulk returns a builder for creating a bulk of DbPackage entities.
func (c *DbPackageClient) CreateBulk(builders ...*DbPackageCreate) *DbPackageCreateBulk {
return &DbPackageCreateBulk{config: c.config, builders: builders}
}
// CreateBulk returns a builder for creating a bulk of DBPackage entities.
func (c *DBPackageClient) CreateBulk(builders ...*DBPackageCreate) *DBPackageCreateBulk {
return &DBPackageCreateBulk{config: c.config, builders: builders}
}
// MapCreateBulk creates a bulk creation builder from the given slice. For each item in the slice, the function creates
// a builder and applies setFunc on it.
func (c *DBPackageClient) MapCreateBulk(slice any, setFunc func(*DBPackageCreate, int)) *DBPackageCreateBulk {
rv := reflect.ValueOf(slice)
if rv.Kind() != reflect.Slice {
return &DBPackageCreateBulk{err: fmt.Errorf("calling to DBPackageClient.MapCreateBulk with wrong type %T, need slice", slice)}
}
builders := make([]*DBPackageCreate, rv.Len())
for i := 0; i < rv.Len(); i++ {
builders[i] = c.Create()
setFunc(builders[i], i)
}
return &DBPackageCreateBulk{config: c.config, builders: builders}
}
// Update returns an update builder for DBPackage.
func (c *DBPackageClient) Update() *DBPackageUpdate {
mutation := newDBPackageMutation(c.config, OpUpdate)
return &DBPackageUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
// Update returns an update builder for DbPackage.
func (c *DbPackageClient) Update() *DbPackageUpdate {
mutation := newDbPackageMutation(c.config, OpUpdate)
return &DbPackageUpdate{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOne returns an update builder for the given entity.
func (c *DBPackageClient) UpdateOne(dp *DBPackage) *DBPackageUpdateOne {
mutation := newDBPackageMutation(c.config, OpUpdateOne, withDBPackage(dp))
return &DBPackageUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
func (c *DbPackageClient) UpdateOne(dp *DbPackage) *DbPackageUpdateOne {
mutation := newDbPackageMutation(c.config, OpUpdateOne, withDbPackage(dp))
return &DbPackageUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// UpdateOneID returns an update builder for the given id.
func (c *DBPackageClient) UpdateOneID(id int) *DBPackageUpdateOne {
mutation := newDBPackageMutation(c.config, OpUpdateOne, withDBPackageID(id))
return &DBPackageUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
func (c *DbPackageClient) UpdateOneID(id int) *DbPackageUpdateOne {
mutation := newDbPackageMutation(c.config, OpUpdateOne, withDbPackageID(id))
return &DbPackageUpdateOne{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// Delete returns a delete builder for DBPackage.
func (c *DBPackageClient) Delete() *DBPackageDelete {
mutation := newDBPackageMutation(c.config, OpDelete)
return &DBPackageDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
// Delete returns a delete builder for DbPackage.
func (c *DbPackageClient) Delete() *DbPackageDelete {
mutation := newDbPackageMutation(c.config, OpDelete)
return &DbPackageDelete{config: c.config, hooks: c.Hooks(), mutation: mutation}
}
// DeleteOne returns a builder for deleting the given entity.
func (c *DBPackageClient) DeleteOne(dp *DBPackage) *DBPackageDeleteOne {
// DeleteOne returns a delete builder for the given entity.
func (c *DbPackageClient) DeleteOne(dp *DbPackage) *DbPackageDeleteOne {
return c.DeleteOneID(dp.ID)
}
// DeleteOneID returns a builder for deleting the given entity by its id.
func (c *DBPackageClient) DeleteOneID(id int) *DBPackageDeleteOne {
// DeleteOneID returns a delete builder for the given id.
func (c *DbPackageClient) DeleteOneID(id int) *DbPackageDeleteOne {
builder := c.Delete().Where(dbpackage.ID(id))
builder.mutation.id = &id
builder.mutation.op = OpDeleteOne
return &DBPackageDeleteOne{builder}
return &DbPackageDeleteOne{builder}
}
// Query returns a query builder for DBPackage.
func (c *DBPackageClient) Query() *DBPackageQuery {
return &DBPackageQuery{
// Query returns a query builder for DbPackage.
func (c *DbPackageClient) Query() *DbPackageQuery {
return &DbPackageQuery{
config: c.config,
ctx: &QueryContext{Type: TypeDBPackage},
inters: c.Interceptors(),
}
}
// Get returns a DBPackage entity by its id.
func (c *DBPackageClient) Get(ctx context.Context, id int) (*DBPackage, error) {
// Get returns a DbPackage entity by its id.
func (c *DbPackageClient) Get(ctx context.Context, id int) (*DbPackage, error) {
return c.Query().Where(dbpackage.ID(id)).Only(ctx)
}
// GetX is like Get, but panics if an error occurs.
func (c *DBPackageClient) GetX(ctx context.Context, id int) *DBPackage {
func (c *DbPackageClient) GetX(ctx context.Context, id int) *DbPackage {
obj, err := c.Get(ctx, id)
if err != nil {
panic(err)
@@ -305,36 +208,6 @@ func (c *DBPackageClient) GetX(ctx context.Context, id int) *DBPackage {
}
// Hooks returns the client hooks.
func (c *DBPackageClient) Hooks() []Hook {
return c.hooks.DBPackage
func (c *DbPackageClient) Hooks() []Hook {
return c.hooks.DbPackage
}
// Interceptors returns the client interceptors.
func (c *DBPackageClient) Interceptors() []Interceptor {
return c.inters.DBPackage
}
func (c *DBPackageClient) mutate(ctx context.Context, m *DBPackageMutation) (Value, error) {
switch m.Op() {
case OpCreate:
return (&DBPackageCreate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpUpdate:
return (&DBPackageUpdate{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpUpdateOne:
return (&DBPackageUpdateOne{config: c.config, hooks: c.Hooks(), mutation: m}).Save(ctx)
case OpDelete, OpDeleteOne:
return (&DBPackageDelete{config: c.config, hooks: c.Hooks(), mutation: m}).Exec(ctx)
default:
return nil, fmt.Errorf("ent: unknown DBPackage mutation op: %q", m.Op())
}
}
// hooks and interceptors per client, for fast access.
type (
hooks struct {
DBPackage []ent.Hook
}
inters struct {
DBPackage []ent.Interceptor
}
)

ent/config.go

@@ -0,0 +1,59 @@
// Code generated by entc, DO NOT EDIT.
package ent
import (
"entgo.io/ent"
"entgo.io/ent/dialect"
)
// Option function to configure the client.
type Option func(*config)
// Config is the configuration for the client and its builder.
type config struct {
// driver used for executing database requests.
driver dialect.Driver
// debug enable a debug logging.
debug bool
// log used for logging on debug mode.
log func(...interface{})
// hooks to execute on mutations.
hooks *hooks
}
// hooks per client, for fast access.
type hooks struct {
DbPackage []ent.Hook
}
// Options applies the options on the config object.
func (c *config) options(opts ...Option) {
for _, opt := range opts {
opt(c)
}
if c.debug {
c.driver = dialect.Debug(c.driver, c.log)
}
}
// Debug enables debug logging on the ent.Driver.
func Debug() Option {
return func(c *config) {
c.debug = true
}
}
// Log sets the logging function for debug mode.
func Log(fn func(...interface{})) Option {
return func(c *config) {
c.log = fn
}
}
// Driver configures the client driver.
func Driver(driver dialect.Driver) Option {
return func(c *config) {
c.driver = driver
}
}

ent/context.go

@@ -0,0 +1,33 @@
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
)
type clientCtxKey struct{}
// FromContext returns a Client stored inside a context, or nil if there isn't one.
func FromContext(ctx context.Context) *Client {
c, _ := ctx.Value(clientCtxKey{}).(*Client)
return c
}
// NewContext returns a new context with the given Client attached.
func NewContext(parent context.Context, c *Client) context.Context {
return context.WithValue(parent, clientCtxKey{}, c)
}
type txCtxKey struct{}
// TxFromContext returns a Tx stored inside a context, or nil if there isn't one.
func TxFromContext(ctx context.Context) *Tx {
tx, _ := ctx.Value(txCtxKey{}).(*Tx)
return tx
}
// NewTxContext returns a new context with the given Tx attached.
func NewTxContext(parent context.Context, tx *Tx) context.Context {
return context.WithValue(parent, txCtxKey{}, tx)
}


@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
@@ -8,13 +8,12 @@ import (
"strings"
"time"
"entgo.io/ent"
"entgo.io/ent/dialect/sql"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
)
// DBPackage is the model entity for the DBPackage schema.
type DBPackage struct {
// DbPackage is the model entity for the DbPackage schema.
type DbPackage struct {
config `json:"-"`
// ID of the ent.
ID int `json:"id,omitempty"`
@@ -38,6 +37,8 @@ type DBPackage struct {
BuildTimeStart time.Time `json:"build_time_start,omitempty"`
// Updated holds the value of the "updated" field.
Updated time.Time `json:"updated,omitempty"`
// Hash holds the value of the "hash" field.
Hash string `json:"hash,omitempty"`
// Lto holds the value of the "lto" field.
Lto dbpackage.Lto `json:"lto,omitempty"`
// LastVersionBuild holds the value of the "last_version_build" field.
@@ -56,34 +57,31 @@ type DBPackage struct {
IoIn *int64 `json:"io_in,omitempty"`
// IoOut holds the value of the "io_out" field.
IoOut *int64 `json:"io_out,omitempty"`
// TagRev holds the value of the "tag_rev" field.
TagRev *string `json:"tag_rev,omitempty"`
selectValues sql.SelectValues
}
// scanValues returns the types for scanning values from sql.Rows.
func (*DBPackage) scanValues(columns []string) ([]any, error) {
values := make([]any, len(columns))
func (*DbPackage) scanValues(columns []string) ([]interface{}, error) {
values := make([]interface{}, len(columns))
for i := range columns {
switch columns[i] {
case dbpackage.FieldPackages:
values[i] = new([]byte)
case dbpackage.FieldID, dbpackage.FieldMaxRss, dbpackage.FieldUTime, dbpackage.FieldSTime, dbpackage.FieldIoIn, dbpackage.FieldIoOut:
values[i] = new(sql.NullInt64)
case dbpackage.FieldPkgbase, dbpackage.FieldStatus, dbpackage.FieldSkipReason, dbpackage.FieldRepository, dbpackage.FieldMarch, dbpackage.FieldVersion, dbpackage.FieldRepoVersion, dbpackage.FieldLto, dbpackage.FieldLastVersionBuild, dbpackage.FieldDebugSymbols, dbpackage.FieldTagRev:
case dbpackage.FieldPkgbase, dbpackage.FieldStatus, dbpackage.FieldSkipReason, dbpackage.FieldRepository, dbpackage.FieldMarch, dbpackage.FieldVersion, dbpackage.FieldRepoVersion, dbpackage.FieldHash, dbpackage.FieldLto, dbpackage.FieldLastVersionBuild, dbpackage.FieldDebugSymbols:
values[i] = new(sql.NullString)
case dbpackage.FieldBuildTimeStart, dbpackage.FieldUpdated, dbpackage.FieldLastVerified:
values[i] = new(sql.NullTime)
default:
values[i] = new(sql.UnknownType)
return nil, fmt.Errorf("unexpected column %q for type DbPackage", columns[i])
}
}
return values, nil
}
// assignValues assigns the values that were returned from sql.Rows (after scanning)
// to the DBPackage fields.
func (dp *DBPackage) assignValues(columns []string, values []any) error {
// to the DbPackage fields.
func (dp *DbPackage) assignValues(columns []string, values []interface{}) error {
if m, n := len(values), len(columns); m < n {
return fmt.Errorf("mismatch number of scan values: %d != %d", m, n)
}
@@ -157,6 +155,12 @@ func (dp *DBPackage) assignValues(columns []string, values []any) error {
} else if value.Valid {
dp.Updated = value.Time
}
case dbpackage.FieldHash:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field hash", values[i])
} else if value.Valid {
dp.Hash = value.String
}
case dbpackage.FieldLto:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field lto", values[i])
@@ -216,123 +220,93 @@ func (dp *DBPackage) assignValues(columns []string, values []any) error {
dp.IoOut = new(int64)
*dp.IoOut = value.Int64
}
case dbpackage.FieldTagRev:
if value, ok := values[i].(*sql.NullString); !ok {
return fmt.Errorf("unexpected type %T for field tag_rev", values[i])
} else if value.Valid {
dp.TagRev = new(string)
*dp.TagRev = value.String
}
default:
dp.selectValues.Set(columns[i], values[i])
}
}
return nil
}
// Value returns the ent.Value that was dynamically selected and assigned to the DBPackage.
// This includes values selected through modifiers, order, etc.
func (dp *DBPackage) Value(name string) (ent.Value, error) {
return dp.selectValues.Get(name)
}
// Update returns a builder for updating this DBPackage.
// Note that you need to call DBPackage.Unwrap() before calling this method if this DBPackage
// Update returns a builder for updating this DbPackage.
// Note that you need to call DbPackage.Unwrap() before calling this method if this DbPackage
// was returned from a transaction, and the transaction was committed or rolled back.
func (dp *DBPackage) Update() *DBPackageUpdateOne {
return NewDBPackageClient(dp.config).UpdateOne(dp)
func (dp *DbPackage) Update() *DbPackageUpdateOne {
return (&DbPackageClient{config: dp.config}).UpdateOne(dp)
}
// Unwrap unwraps the DBPackage entity that was returned from a transaction after it was closed,
// Unwrap unwraps the DbPackage entity that was returned from a transaction after it was closed,
// so that all future queries will be executed through the driver which created the transaction.
func (dp *DBPackage) Unwrap() *DBPackage {
_tx, ok := dp.config.driver.(*txDriver)
func (dp *DbPackage) Unwrap() *DbPackage {
tx, ok := dp.config.driver.(*txDriver)
if !ok {
panic("ent: DBPackage is not a transactional entity")
panic("ent: DbPackage is not a transactional entity")
}
dp.config.driver = _tx.drv
dp.config.driver = tx.drv
return dp
}
// String implements the fmt.Stringer.
func (dp *DBPackage) String() string {
func (dp *DbPackage) String() string {
var builder strings.Builder
builder.WriteString("DBPackage(")
builder.WriteString(fmt.Sprintf("id=%v, ", dp.ID))
builder.WriteString("pkgbase=")
builder.WriteString("DbPackage(")
builder.WriteString(fmt.Sprintf("id=%v", dp.ID))
builder.WriteString(", pkgbase=")
builder.WriteString(dp.Pkgbase)
builder.WriteString(", ")
builder.WriteString("packages=")
builder.WriteString(", packages=")
builder.WriteString(fmt.Sprintf("%v", dp.Packages))
builder.WriteString(", ")
builder.WriteString("status=")
builder.WriteString(", status=")
builder.WriteString(fmt.Sprintf("%v", dp.Status))
builder.WriteString(", ")
builder.WriteString("skip_reason=")
builder.WriteString(", skip_reason=")
builder.WriteString(dp.SkipReason)
builder.WriteString(", ")
builder.WriteString("repository=")
builder.WriteString(", repository=")
builder.WriteString(fmt.Sprintf("%v", dp.Repository))
builder.WriteString(", ")
builder.WriteString("march=")
builder.WriteString(", march=")
builder.WriteString(dp.March)
builder.WriteString(", ")
builder.WriteString("version=")
builder.WriteString(", version=")
builder.WriteString(dp.Version)
builder.WriteString(", ")
builder.WriteString("repo_version=")
builder.WriteString(", repo_version=")
builder.WriteString(dp.RepoVersion)
builder.WriteString(", ")
builder.WriteString("build_time_start=")
builder.WriteString(", build_time_start=")
builder.WriteString(dp.BuildTimeStart.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("updated=")
builder.WriteString(", updated=")
builder.WriteString(dp.Updated.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("lto=")
builder.WriteString(", hash=")
builder.WriteString(dp.Hash)
builder.WriteString(", lto=")
builder.WriteString(fmt.Sprintf("%v", dp.Lto))
builder.WriteString(", ")
builder.WriteString("last_version_build=")
builder.WriteString(", last_version_build=")
builder.WriteString(dp.LastVersionBuild)
builder.WriteString(", ")
builder.WriteString("last_verified=")
builder.WriteString(", last_verified=")
builder.WriteString(dp.LastVerified.Format(time.ANSIC))
builder.WriteString(", ")
builder.WriteString("debug_symbols=")
builder.WriteString(", debug_symbols=")
builder.WriteString(fmt.Sprintf("%v", dp.DebugSymbols))
builder.WriteString(", ")
if v := dp.MaxRss; v != nil {
builder.WriteString("max_rss=")
builder.WriteString(", max_rss=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
if v := dp.UTime; v != nil {
builder.WriteString("u_time=")
builder.WriteString(", u_time=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
if v := dp.STime; v != nil {
builder.WriteString("s_time=")
builder.WriteString(", s_time=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
if v := dp.IoIn; v != nil {
builder.WriteString("io_in=")
builder.WriteString(", io_in=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
if v := dp.IoOut; v != nil {
builder.WriteString("io_out=")
builder.WriteString(", io_out=")
builder.WriteString(fmt.Sprintf("%v", *v))
}
builder.WriteString(", ")
if v := dp.TagRev; v != nil {
builder.WriteString("tag_rev=")
builder.WriteString(*v)
}
builder.WriteByte(')')
return builder.String()
}
// DBPackages is a parsable slice of DBPackage.
type DBPackages []*DBPackage
// DbPackages is a parsable slice of DbPackage.
type DbPackages []*DbPackage
func (dp DbPackages) config(cfg config) {
for _i := range dp {
dp[_i].config = cfg
}
}
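A minimal usage sketch for the entity API above (illustrative only, not part of the diff): it assumes the newer DBPackage naming and somegit.dev import path from this changeset, and the Pkgbase predicate comes from the generated predicate helpers that are not shown in this excerpt. It shows the Unwrap/Update flow described in the comments, where an entity loaded in a committed transaction must be unwrapped before its Update builder can be reused.

package example

import (
	"context"
	"log"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)

// updateAfterTx loads a DBPackage inside a transaction, commits, then switches
// the entity back to the non-transactional driver via Unwrap before calling the
// generated Update builder.
func updateAfterTx(ctx context.Context, client *ent.Client) {
	tx, err := client.Tx(ctx)
	if err != nil {
		log.Fatal(err)
	}
	pkg, err := tx.DBPackage.Query().Where(dbpackage.Pkgbase("linux")).Only(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
	pkg = pkg.Unwrap() // panics if the entity was not returned from a transaction
	if _, err := pkg.Update().SetStatus(dbpackage.StatusQueued).Save(ctx); err != nil {
		log.Fatal(err)
	}
}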

View File

@@ -1,11 +1,9 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package dbpackage
import (
"fmt"
"entgo.io/ent/dialect/sql"
)
const (
@@ -33,6 +31,8 @@ const (
FieldBuildTimeStart = "build_time_start"
// FieldUpdated holds the string denoting the updated field in the database.
FieldUpdated = "updated"
// FieldHash holds the string denoting the hash field in the database.
FieldHash = "hash"
// FieldLto holds the string denoting the lto field in the database.
FieldLto = "lto"
// FieldLastVersionBuild holds the string denoting the last_version_build field in the database.
@@ -51,8 +51,6 @@ const (
FieldIoIn = "io_in"
// FieldIoOut holds the string denoting the io_out field in the database.
FieldIoOut = "io_out"
// FieldTagRev holds the string denoting the tag_rev field in the database.
FieldTagRev = "tag_rev"
// Table holds the table name of the dbpackage in the database.
Table = "db_packages"
)
@@ -70,6 +68,7 @@ var Columns = []string{
FieldRepoVersion,
FieldBuildTimeStart,
FieldUpdated,
FieldHash,
FieldLto,
FieldLastVersionBuild,
FieldLastVerified,
@@ -79,7 +78,6 @@ var Columns = []string{
FieldSTime,
FieldIoIn,
FieldIoOut,
FieldTagRev,
}
// ValidColumn reports if the column name is valid (part of the table columns).
@@ -109,9 +107,8 @@ const DefaultStatus = StatusUnknown
const (
StatusSkipped Status = "skipped"
StatusFailed Status = "failed"
StatusBuilt Status = "built"
StatusBuild Status = "build"
StatusQueued Status = "queued"
StatusDelayed Status = "delayed"
StatusBuilding Status = "building"
StatusLatest Status = "latest"
StatusSigning Status = "signing"
@@ -125,7 +122,7 @@ func (s Status) String() string {
// StatusValidator is a validator for the "status" field enum values. It is called by the builders before save.
func StatusValidator(s Status) error {
switch s {
case StatusSkipped, StatusFailed, StatusBuilt, StatusQueued, StatusDelayed, StatusBuilding, StatusLatest, StatusSigning, StatusUnknown:
case StatusSkipped, StatusFailed, StatusBuild, StatusQueued, StatusBuilding, StatusLatest, StatusSigning, StatusUnknown:
return nil
default:
return fmt.Errorf("dbpackage: invalid enum value for status field: %q", s)
@@ -137,9 +134,9 @@ type Repository string
// Repository values.
const (
RepositoryExtra Repository = "extra"
RepositoryCore Repository = "core"
RepositoryMultilib Repository = "multilib"
RepositoryExtra Repository = "extra"
RepositoryCore Repository = "core"
RepositoryCommunity Repository = "community"
)
func (r Repository) String() string {
@@ -149,7 +146,7 @@ func (r Repository) String() string {
// RepositoryValidator is a validator for the "repository" field enum values. It is called by the builders before save.
func RepositoryValidator(r Repository) error {
switch r {
case RepositoryExtra, RepositoryCore, RepositoryMultilib:
case RepositoryExtra, RepositoryCore, RepositoryCommunity:
return nil
default:
return fmt.Errorf("dbpackage: invalid enum value for repository field: %q", r)
@@ -210,106 +207,3 @@ func DebugSymbolsValidator(ds DebugSymbols) error {
return fmt.Errorf("dbpackage: invalid enum value for debug_symbols field: %q", ds)
}
}
// OrderOption defines the ordering options for the DBPackage queries.
type OrderOption func(*sql.Selector)
// ByID orders the results by the id field.
func ByID(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldID, opts...).ToFunc()
}
// ByPkgbase orders the results by the pkgbase field.
func ByPkgbase(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldPkgbase, opts...).ToFunc()
}
// ByStatus orders the results by the status field.
func ByStatus(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldStatus, opts...).ToFunc()
}
// BySkipReason orders the results by the skip_reason field.
func BySkipReason(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldSkipReason, opts...).ToFunc()
}
// ByRepository orders the results by the repository field.
func ByRepository(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldRepository, opts...).ToFunc()
}
// ByMarch orders the results by the march field.
func ByMarch(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldMarch, opts...).ToFunc()
}
// ByVersion orders the results by the version field.
func ByVersion(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldVersion, opts...).ToFunc()
}
// ByRepoVersion orders the results by the repo_version field.
func ByRepoVersion(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldRepoVersion, opts...).ToFunc()
}
// ByBuildTimeStart orders the results by the build_time_start field.
func ByBuildTimeStart(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldBuildTimeStart, opts...).ToFunc()
}
// ByUpdated orders the results by the updated field.
func ByUpdated(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldUpdated, opts...).ToFunc()
}
// ByLto orders the results by the lto field.
func ByLto(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldLto, opts...).ToFunc()
}
// ByLastVersionBuild orders the results by the last_version_build field.
func ByLastVersionBuild(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldLastVersionBuild, opts...).ToFunc()
}
// ByLastVerified orders the results by the last_verified field.
func ByLastVerified(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldLastVerified, opts...).ToFunc()
}
// ByDebugSymbols orders the results by the debug_symbols field.
func ByDebugSymbols(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldDebugSymbols, opts...).ToFunc()
}
// ByMaxRss orders the results by the max_rss field.
func ByMaxRss(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldMaxRss, opts...).ToFunc()
}
// ByUTime orders the results by the u_time field.
func ByUTime(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldUTime, opts...).ToFunc()
}
// BySTime orders the results by the s_time field.
func BySTime(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldSTime, opts...).ToFunc()
}
// ByIoIn orders the results by the io_in field.
func ByIoIn(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldIoIn, opts...).ToFunc()
}
// ByIoOut orders the results by the io_out field.
func ByIoOut(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldIoOut, opts...).ToFunc()
}
// ByTagRev orders the results by the tag_rev field.
func ByTagRev(opts ...sql.OrderTermOption) OrderOption {
return sql.OrderByField(FieldTagRev, opts...).ToFunc()
}
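The OrderOption helpers above plug into the generated query builder. A short sketch, illustrative only and assuming the newer DBPackage naming: order results by the updated field descending, with pkgbase as a tie-breaker.

package example

import (
	"context"
	"log"

	"entgo.io/ent/dialect/sql"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)

// newestFirst returns all packages ordered by updated (descending), then pkgbase.
func newestFirst(ctx context.Context, client *ent.Client) []*ent.DBPackage {
	pkgs, err := client.DBPackage.Query().
		Order(dbpackage.ByUpdated(sql.OrderDesc()), dbpackage.ByPkgbase()).
		All(ctx)
	if err != nil {
		log.Fatal(err)
	}
	return pkgs
}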

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
@@ -10,36 +10,36 @@ import (
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
)
// DBPackageCreate is the builder for creating a DBPackage entity.
type DBPackageCreate struct {
// DbPackageCreate is the builder for creating a DbPackage entity.
type DbPackageCreate struct {
config
mutation *DBPackageMutation
mutation *DbPackageMutation
hooks []Hook
}
// SetPkgbase sets the "pkgbase" field.
func (dpc *DBPackageCreate) SetPkgbase(s string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetPkgbase(s string) *DbPackageCreate {
dpc.mutation.SetPkgbase(s)
return dpc
}
// SetPackages sets the "packages" field.
func (dpc *DBPackageCreate) SetPackages(s []string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetPackages(s []string) *DbPackageCreate {
dpc.mutation.SetPackages(s)
return dpc
}
// SetStatus sets the "status" field.
func (dpc *DBPackageCreate) SetStatus(d dbpackage.Status) *DBPackageCreate {
func (dpc *DbPackageCreate) SetStatus(d dbpackage.Status) *DbPackageCreate {
dpc.mutation.SetStatus(d)
return dpc
}
// SetNillableStatus sets the "status" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableStatus(d *dbpackage.Status) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableStatus(d *dbpackage.Status) *DbPackageCreate {
if d != nil {
dpc.SetStatus(*d)
}
@@ -47,13 +47,13 @@ func (dpc *DBPackageCreate) SetNillableStatus(d *dbpackage.Status) *DBPackageCre
}
// SetSkipReason sets the "skip_reason" field.
func (dpc *DBPackageCreate) SetSkipReason(s string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetSkipReason(s string) *DbPackageCreate {
dpc.mutation.SetSkipReason(s)
return dpc
}
// SetNillableSkipReason sets the "skip_reason" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableSkipReason(s *string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableSkipReason(s *string) *DbPackageCreate {
if s != nil {
dpc.SetSkipReason(*s)
}
@@ -61,25 +61,25 @@ func (dpc *DBPackageCreate) SetNillableSkipReason(s *string) *DBPackageCreate {
}
// SetRepository sets the "repository" field.
func (dpc *DBPackageCreate) SetRepository(d dbpackage.Repository) *DBPackageCreate {
func (dpc *DbPackageCreate) SetRepository(d dbpackage.Repository) *DbPackageCreate {
dpc.mutation.SetRepository(d)
return dpc
}
// SetMarch sets the "march" field.
func (dpc *DBPackageCreate) SetMarch(s string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetMarch(s string) *DbPackageCreate {
dpc.mutation.SetMarch(s)
return dpc
}
// SetVersion sets the "version" field.
func (dpc *DBPackageCreate) SetVersion(s string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetVersion(s string) *DbPackageCreate {
dpc.mutation.SetVersion(s)
return dpc
}
// SetNillableVersion sets the "version" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableVersion(s *string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableVersion(s *string) *DbPackageCreate {
if s != nil {
dpc.SetVersion(*s)
}
@@ -87,13 +87,13 @@ func (dpc *DBPackageCreate) SetNillableVersion(s *string) *DBPackageCreate {
}
// SetRepoVersion sets the "repo_version" field.
func (dpc *DBPackageCreate) SetRepoVersion(s string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetRepoVersion(s string) *DbPackageCreate {
dpc.mutation.SetRepoVersion(s)
return dpc
}
// SetNillableRepoVersion sets the "repo_version" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableRepoVersion(s *string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableRepoVersion(s *string) *DbPackageCreate {
if s != nil {
dpc.SetRepoVersion(*s)
}
@@ -101,13 +101,13 @@ func (dpc *DBPackageCreate) SetNillableRepoVersion(s *string) *DBPackageCreate {
}
// SetBuildTimeStart sets the "build_time_start" field.
func (dpc *DBPackageCreate) SetBuildTimeStart(t time.Time) *DBPackageCreate {
func (dpc *DbPackageCreate) SetBuildTimeStart(t time.Time) *DbPackageCreate {
dpc.mutation.SetBuildTimeStart(t)
return dpc
}
// SetNillableBuildTimeStart sets the "build_time_start" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableBuildTimeStart(t *time.Time) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableBuildTimeStart(t *time.Time) *DbPackageCreate {
if t != nil {
dpc.SetBuildTimeStart(*t)
}
@@ -115,27 +115,41 @@ func (dpc *DBPackageCreate) SetNillableBuildTimeStart(t *time.Time) *DBPackageCr
}
// SetUpdated sets the "updated" field.
func (dpc *DBPackageCreate) SetUpdated(t time.Time) *DBPackageCreate {
func (dpc *DbPackageCreate) SetUpdated(t time.Time) *DbPackageCreate {
dpc.mutation.SetUpdated(t)
return dpc
}
// SetNillableUpdated sets the "updated" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableUpdated(t *time.Time) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableUpdated(t *time.Time) *DbPackageCreate {
if t != nil {
dpc.SetUpdated(*t)
}
return dpc
}
// SetHash sets the "hash" field.
func (dpc *DbPackageCreate) SetHash(s string) *DbPackageCreate {
dpc.mutation.SetHash(s)
return dpc
}
// SetNillableHash sets the "hash" field if the given value is not nil.
func (dpc *DbPackageCreate) SetNillableHash(s *string) *DbPackageCreate {
if s != nil {
dpc.SetHash(*s)
}
return dpc
}
// SetLto sets the "lto" field.
func (dpc *DBPackageCreate) SetLto(d dbpackage.Lto) *DBPackageCreate {
func (dpc *DbPackageCreate) SetLto(d dbpackage.Lto) *DbPackageCreate {
dpc.mutation.SetLto(d)
return dpc
}
// SetNillableLto sets the "lto" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableLto(d *dbpackage.Lto) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableLto(d *dbpackage.Lto) *DbPackageCreate {
if d != nil {
dpc.SetLto(*d)
}
@@ -143,13 +157,13 @@ func (dpc *DBPackageCreate) SetNillableLto(d *dbpackage.Lto) *DBPackageCreate {
}
// SetLastVersionBuild sets the "last_version_build" field.
func (dpc *DBPackageCreate) SetLastVersionBuild(s string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetLastVersionBuild(s string) *DbPackageCreate {
dpc.mutation.SetLastVersionBuild(s)
return dpc
}
// SetNillableLastVersionBuild sets the "last_version_build" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableLastVersionBuild(s *string) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableLastVersionBuild(s *string) *DbPackageCreate {
if s != nil {
dpc.SetLastVersionBuild(*s)
}
@@ -157,13 +171,13 @@ func (dpc *DBPackageCreate) SetNillableLastVersionBuild(s *string) *DBPackageCre
}
// SetLastVerified sets the "last_verified" field.
func (dpc *DBPackageCreate) SetLastVerified(t time.Time) *DBPackageCreate {
func (dpc *DbPackageCreate) SetLastVerified(t time.Time) *DbPackageCreate {
dpc.mutation.SetLastVerified(t)
return dpc
}
// SetNillableLastVerified sets the "last_verified" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableLastVerified(t *time.Time) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableLastVerified(t *time.Time) *DbPackageCreate {
if t != nil {
dpc.SetLastVerified(*t)
}
@@ -171,13 +185,13 @@ func (dpc *DBPackageCreate) SetNillableLastVerified(t *time.Time) *DBPackageCrea
}
// SetDebugSymbols sets the "debug_symbols" field.
func (dpc *DBPackageCreate) SetDebugSymbols(ds dbpackage.DebugSymbols) *DBPackageCreate {
func (dpc *DbPackageCreate) SetDebugSymbols(ds dbpackage.DebugSymbols) *DbPackageCreate {
dpc.mutation.SetDebugSymbols(ds)
return dpc
}
// SetNillableDebugSymbols sets the "debug_symbols" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableDebugSymbols(ds *dbpackage.DebugSymbols) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableDebugSymbols(ds *dbpackage.DebugSymbols) *DbPackageCreate {
if ds != nil {
dpc.SetDebugSymbols(*ds)
}
@@ -185,13 +199,13 @@ func (dpc *DBPackageCreate) SetNillableDebugSymbols(ds *dbpackage.DebugSymbols)
}
// SetMaxRss sets the "max_rss" field.
func (dpc *DBPackageCreate) SetMaxRss(i int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetMaxRss(i int64) *DbPackageCreate {
dpc.mutation.SetMaxRss(i)
return dpc
}
// SetNillableMaxRss sets the "max_rss" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableMaxRss(i *int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableMaxRss(i *int64) *DbPackageCreate {
if i != nil {
dpc.SetMaxRss(*i)
}
@@ -199,13 +213,13 @@ func (dpc *DBPackageCreate) SetNillableMaxRss(i *int64) *DBPackageCreate {
}
// SetUTime sets the "u_time" field.
func (dpc *DBPackageCreate) SetUTime(i int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetUTime(i int64) *DbPackageCreate {
dpc.mutation.SetUTime(i)
return dpc
}
// SetNillableUTime sets the "u_time" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableUTime(i *int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableUTime(i *int64) *DbPackageCreate {
if i != nil {
dpc.SetUTime(*i)
}
@@ -213,13 +227,13 @@ func (dpc *DBPackageCreate) SetNillableUTime(i *int64) *DBPackageCreate {
}
// SetSTime sets the "s_time" field.
func (dpc *DBPackageCreate) SetSTime(i int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetSTime(i int64) *DbPackageCreate {
dpc.mutation.SetSTime(i)
return dpc
}
// SetNillableSTime sets the "s_time" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableSTime(i *int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableSTime(i *int64) *DbPackageCreate {
if i != nil {
dpc.SetSTime(*i)
}
@@ -227,13 +241,13 @@ func (dpc *DBPackageCreate) SetNillableSTime(i *int64) *DBPackageCreate {
}
// SetIoIn sets the "io_in" field.
func (dpc *DBPackageCreate) SetIoIn(i int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetIoIn(i int64) *DbPackageCreate {
dpc.mutation.SetIoIn(i)
return dpc
}
// SetNillableIoIn sets the "io_in" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableIoIn(i *int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableIoIn(i *int64) *DbPackageCreate {
if i != nil {
dpc.SetIoIn(*i)
}
@@ -241,46 +255,68 @@ func (dpc *DBPackageCreate) SetNillableIoIn(i *int64) *DBPackageCreate {
}
// SetIoOut sets the "io_out" field.
func (dpc *DBPackageCreate) SetIoOut(i int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetIoOut(i int64) *DbPackageCreate {
dpc.mutation.SetIoOut(i)
return dpc
}
// SetNillableIoOut sets the "io_out" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableIoOut(i *int64) *DBPackageCreate {
func (dpc *DbPackageCreate) SetNillableIoOut(i *int64) *DbPackageCreate {
if i != nil {
dpc.SetIoOut(*i)
}
return dpc
}
// SetTagRev sets the "tag_rev" field.
func (dpc *DBPackageCreate) SetTagRev(s string) *DBPackageCreate {
dpc.mutation.SetTagRev(s)
return dpc
}
// SetNillableTagRev sets the "tag_rev" field if the given value is not nil.
func (dpc *DBPackageCreate) SetNillableTagRev(s *string) *DBPackageCreate {
if s != nil {
dpc.SetTagRev(*s)
}
return dpc
}
// Mutation returns the DBPackageMutation object of the builder.
func (dpc *DBPackageCreate) Mutation() *DBPackageMutation {
// Mutation returns the DbPackageMutation object of the builder.
func (dpc *DbPackageCreate) Mutation() *DbPackageMutation {
return dpc.mutation
}
// Save creates the DBPackage in the database.
func (dpc *DBPackageCreate) Save(ctx context.Context) (*DBPackage, error) {
// Save creates the DbPackage in the database.
func (dpc *DbPackageCreate) Save(ctx context.Context) (*DbPackage, error) {
var (
err error
node *DbPackage
)
dpc.defaults()
return withHooks(ctx, dpc.sqlSave, dpc.mutation, dpc.hooks)
if len(dpc.hooks) == 0 {
if err = dpc.check(); err != nil {
return nil, err
}
node, err = dpc.sqlSave(ctx)
} else {
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*DbPackageMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
if err = dpc.check(); err != nil {
return nil, err
}
dpc.mutation = mutation
if node, err = dpc.sqlSave(ctx); err != nil {
return nil, err
}
mutation.id = &node.ID
mutation.done = true
return node, err
})
for i := len(dpc.hooks) - 1; i >= 0; i-- {
if dpc.hooks[i] == nil {
return nil, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
}
mut = dpc.hooks[i](mut)
}
if _, err := mut.Mutate(ctx, dpc.mutation); err != nil {
return nil, err
}
}
return node, err
}
// SaveX calls Save and panics if Save returns an error.
func (dpc *DBPackageCreate) SaveX(ctx context.Context) *DBPackage {
func (dpc *DbPackageCreate) SaveX(ctx context.Context) *DbPackage {
v, err := dpc.Save(ctx)
if err != nil {
panic(err)
@@ -289,20 +325,20 @@ func (dpc *DBPackageCreate) SaveX(ctx context.Context) *DBPackage {
}
// Exec executes the query.
func (dpc *DBPackageCreate) Exec(ctx context.Context) error {
func (dpc *DbPackageCreate) Exec(ctx context.Context) error {
_, err := dpc.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (dpc *DBPackageCreate) ExecX(ctx context.Context) {
func (dpc *DbPackageCreate) ExecX(ctx context.Context) {
if err := dpc.Exec(ctx); err != nil {
panic(err)
}
}
// defaults sets the default values of the builder before save.
func (dpc *DBPackageCreate) defaults() {
func (dpc *DbPackageCreate) defaults() {
if _, ok := dpc.mutation.Status(); !ok {
v := dbpackage.DefaultStatus
dpc.mutation.SetStatus(v)
@@ -318,176 +354,253 @@ func (dpc *DBPackageCreate) defaults() {
}
// check runs all checks and user-defined validators on the builder.
func (dpc *DBPackageCreate) check() error {
func (dpc *DbPackageCreate) check() error {
if _, ok := dpc.mutation.Pkgbase(); !ok {
return &ValidationError{Name: "pkgbase", err: errors.New(`ent: missing required field "DBPackage.pkgbase"`)}
return &ValidationError{Name: "pkgbase", err: errors.New(`ent: missing required field "DbPackage.pkgbase"`)}
}
if v, ok := dpc.mutation.Pkgbase(); ok {
if err := dbpackage.PkgbaseValidator(v); err != nil {
return &ValidationError{Name: "pkgbase", err: fmt.Errorf(`ent: validator failed for field "DBPackage.pkgbase": %w`, err)}
return &ValidationError{Name: "pkgbase", err: fmt.Errorf(`ent: validator failed for field "DbPackage.pkgbase": %w`, err)}
}
}
if v, ok := dpc.mutation.Status(); ok {
if err := dbpackage.StatusValidator(v); err != nil {
return &ValidationError{Name: "status", err: fmt.Errorf(`ent: validator failed for field "DBPackage.status": %w`, err)}
return &ValidationError{Name: "status", err: fmt.Errorf(`ent: validator failed for field "DbPackage.status": %w`, err)}
}
}
if _, ok := dpc.mutation.Repository(); !ok {
return &ValidationError{Name: "repository", err: errors.New(`ent: missing required field "DBPackage.repository"`)}
return &ValidationError{Name: "repository", err: errors.New(`ent: missing required field "DbPackage.repository"`)}
}
if v, ok := dpc.mutation.Repository(); ok {
if err := dbpackage.RepositoryValidator(v); err != nil {
return &ValidationError{Name: "repository", err: fmt.Errorf(`ent: validator failed for field "DBPackage.repository": %w`, err)}
return &ValidationError{Name: "repository", err: fmt.Errorf(`ent: validator failed for field "DbPackage.repository": %w`, err)}
}
}
if _, ok := dpc.mutation.March(); !ok {
return &ValidationError{Name: "march", err: errors.New(`ent: missing required field "DBPackage.march"`)}
return &ValidationError{Name: "march", err: errors.New(`ent: missing required field "DbPackage.march"`)}
}
if v, ok := dpc.mutation.March(); ok {
if err := dbpackage.MarchValidator(v); err != nil {
return &ValidationError{Name: "march", err: fmt.Errorf(`ent: validator failed for field "DBPackage.march": %w`, err)}
return &ValidationError{Name: "march", err: fmt.Errorf(`ent: validator failed for field "DbPackage.march": %w`, err)}
}
}
if v, ok := dpc.mutation.Lto(); ok {
if err := dbpackage.LtoValidator(v); err != nil {
return &ValidationError{Name: "lto", err: fmt.Errorf(`ent: validator failed for field "DBPackage.lto": %w`, err)}
return &ValidationError{Name: "lto", err: fmt.Errorf(`ent: validator failed for field "DbPackage.lto": %w`, err)}
}
}
if v, ok := dpc.mutation.DebugSymbols(); ok {
if err := dbpackage.DebugSymbolsValidator(v); err != nil {
return &ValidationError{Name: "debug_symbols", err: fmt.Errorf(`ent: validator failed for field "DBPackage.debug_symbols": %w`, err)}
return &ValidationError{Name: "debug_symbols", err: fmt.Errorf(`ent: validator failed for field "DbPackage.debug_symbols": %w`, err)}
}
}
return nil
}
func (dpc *DBPackageCreate) sqlSave(ctx context.Context) (*DBPackage, error) {
if err := dpc.check(); err != nil {
return nil, err
}
func (dpc *DbPackageCreate) sqlSave(ctx context.Context) (*DbPackage, error) {
_node, _spec := dpc.createSpec()
if err := sqlgraph.CreateNode(ctx, dpc.driver, _spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
err = &ConstraintError{err.Error(), err}
}
return nil, err
}
id := _spec.ID.Value.(int64)
_node.ID = int(id)
dpc.mutation.id = &_node.ID
dpc.mutation.done = true
return _node, nil
}
func (dpc *DBPackageCreate) createSpec() (*DBPackage, *sqlgraph.CreateSpec) {
func (dpc *DbPackageCreate) createSpec() (*DbPackage, *sqlgraph.CreateSpec) {
var (
_node = &DBPackage{config: dpc.config}
_spec = sqlgraph.NewCreateSpec(dbpackage.Table, sqlgraph.NewFieldSpec(dbpackage.FieldID, field.TypeInt))
_node = &DbPackage{config: dpc.config}
_spec = &sqlgraph.CreateSpec{
Table: dbpackage.Table,
ID: &sqlgraph.FieldSpec{
Type: field.TypeInt,
Column: dbpackage.FieldID,
},
}
)
if value, ok := dpc.mutation.Pkgbase(); ok {
_spec.SetField(dbpackage.FieldPkgbase, field.TypeString, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldPkgbase,
})
_node.Pkgbase = value
}
if value, ok := dpc.mutation.Packages(); ok {
_spec.SetField(dbpackage.FieldPackages, field.TypeJSON, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeJSON,
Value: value,
Column: dbpackage.FieldPackages,
})
_node.Packages = value
}
if value, ok := dpc.mutation.Status(); ok {
_spec.SetField(dbpackage.FieldStatus, field.TypeEnum, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeEnum,
Value: value,
Column: dbpackage.FieldStatus,
})
_node.Status = value
}
if value, ok := dpc.mutation.SkipReason(); ok {
_spec.SetField(dbpackage.FieldSkipReason, field.TypeString, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldSkipReason,
})
_node.SkipReason = value
}
if value, ok := dpc.mutation.Repository(); ok {
_spec.SetField(dbpackage.FieldRepository, field.TypeEnum, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeEnum,
Value: value,
Column: dbpackage.FieldRepository,
})
_node.Repository = value
}
if value, ok := dpc.mutation.March(); ok {
_spec.SetField(dbpackage.FieldMarch, field.TypeString, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldMarch,
})
_node.March = value
}
if value, ok := dpc.mutation.Version(); ok {
_spec.SetField(dbpackage.FieldVersion, field.TypeString, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldVersion,
})
_node.Version = value
}
if value, ok := dpc.mutation.RepoVersion(); ok {
_spec.SetField(dbpackage.FieldRepoVersion, field.TypeString, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldRepoVersion,
})
_node.RepoVersion = value
}
if value, ok := dpc.mutation.BuildTimeStart(); ok {
_spec.SetField(dbpackage.FieldBuildTimeStart, field.TypeTime, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: dbpackage.FieldBuildTimeStart,
})
_node.BuildTimeStart = value
}
if value, ok := dpc.mutation.Updated(); ok {
_spec.SetField(dbpackage.FieldUpdated, field.TypeTime, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: dbpackage.FieldUpdated,
})
_node.Updated = value
}
if value, ok := dpc.mutation.Hash(); ok {
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldHash,
})
_node.Hash = value
}
if value, ok := dpc.mutation.Lto(); ok {
_spec.SetField(dbpackage.FieldLto, field.TypeEnum, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeEnum,
Value: value,
Column: dbpackage.FieldLto,
})
_node.Lto = value
}
if value, ok := dpc.mutation.LastVersionBuild(); ok {
_spec.SetField(dbpackage.FieldLastVersionBuild, field.TypeString, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeString,
Value: value,
Column: dbpackage.FieldLastVersionBuild,
})
_node.LastVersionBuild = value
}
if value, ok := dpc.mutation.LastVerified(); ok {
_spec.SetField(dbpackage.FieldLastVerified, field.TypeTime, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeTime,
Value: value,
Column: dbpackage.FieldLastVerified,
})
_node.LastVerified = value
}
if value, ok := dpc.mutation.DebugSymbols(); ok {
_spec.SetField(dbpackage.FieldDebugSymbols, field.TypeEnum, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeEnum,
Value: value,
Column: dbpackage.FieldDebugSymbols,
})
_node.DebugSymbols = value
}
if value, ok := dpc.mutation.MaxRss(); ok {
_spec.SetField(dbpackage.FieldMaxRss, field.TypeInt64, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: dbpackage.FieldMaxRss,
})
_node.MaxRss = &value
}
if value, ok := dpc.mutation.UTime(); ok {
_spec.SetField(dbpackage.FieldUTime, field.TypeInt64, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: dbpackage.FieldUTime,
})
_node.UTime = &value
}
if value, ok := dpc.mutation.STime(); ok {
_spec.SetField(dbpackage.FieldSTime, field.TypeInt64, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: dbpackage.FieldSTime,
})
_node.STime = &value
}
if value, ok := dpc.mutation.IoIn(); ok {
_spec.SetField(dbpackage.FieldIoIn, field.TypeInt64, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: dbpackage.FieldIoIn,
})
_node.IoIn = &value
}
if value, ok := dpc.mutation.IoOut(); ok {
_spec.SetField(dbpackage.FieldIoOut, field.TypeInt64, value)
_spec.Fields = append(_spec.Fields, &sqlgraph.FieldSpec{
Type: field.TypeInt64,
Value: value,
Column: dbpackage.FieldIoOut,
})
_node.IoOut = &value
}
if value, ok := dpc.mutation.TagRev(); ok {
_spec.SetField(dbpackage.FieldTagRev, field.TypeString, value)
_node.TagRev = &value
}
return _node, _spec
}
// DBPackageCreateBulk is the builder for creating many DBPackage entities in bulk.
type DBPackageCreateBulk struct {
// DbPackageCreateBulk is the builder for creating many DbPackage entities in bulk.
type DbPackageCreateBulk struct {
config
err error
builders []*DBPackageCreate
builders []*DbPackageCreate
}
// Save creates the DBPackage entities in the database.
func (dpcb *DBPackageCreateBulk) Save(ctx context.Context) ([]*DBPackage, error) {
if dpcb.err != nil {
return nil, dpcb.err
}
// Save creates the DbPackage entities in the database.
func (dpcb *DbPackageCreateBulk) Save(ctx context.Context) ([]*DbPackage, error) {
specs := make([]*sqlgraph.CreateSpec, len(dpcb.builders))
nodes := make([]*DBPackage, len(dpcb.builders))
nodes := make([]*DbPackage, len(dpcb.builders))
mutators := make([]Mutator, len(dpcb.builders))
for i := range dpcb.builders {
func(i int, root context.Context) {
builder := dpcb.builders[i]
builder.defaults()
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*DBPackageMutation)
mutation, ok := m.(*DbPackageMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
@@ -495,8 +608,8 @@ func (dpcb *DBPackageCreateBulk) Save(ctx context.Context) ([]*DBPackage, error)
return nil, err
}
builder.mutation = mutation
var err error
nodes[i], specs[i] = builder.createSpec()
var err error
if i < len(mutators)-1 {
_, err = mutators[i+1].Mutate(root, dpcb.builders[i+1].mutation)
} else {
@@ -504,7 +617,7 @@ func (dpcb *DBPackageCreateBulk) Save(ctx context.Context) ([]*DBPackage, error)
// Invoke the actual operation on the latest mutation in the chain.
if err = sqlgraph.BatchCreate(ctx, dpcb.driver, spec); err != nil {
if sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
err = &ConstraintError{err.Error(), err}
}
}
}
@@ -512,11 +625,11 @@ func (dpcb *DBPackageCreateBulk) Save(ctx context.Context) ([]*DBPackage, error)
return nil, err
}
mutation.id = &nodes[i].ID
mutation.done = true
if specs[i].ID.Value != nil {
id := specs[i].ID.Value.(int64)
nodes[i].ID = int(id)
}
mutation.done = true
return nodes[i], nil
})
for i := len(builder.hooks) - 1; i >= 0; i-- {
@@ -534,7 +647,7 @@ func (dpcb *DBPackageCreateBulk) Save(ctx context.Context) ([]*DBPackage, error)
}
// SaveX is like Save, but panics if an error occurs.
func (dpcb *DBPackageCreateBulk) SaveX(ctx context.Context) []*DBPackage {
func (dpcb *DbPackageCreateBulk) SaveX(ctx context.Context) []*DbPackage {
v, err := dpcb.Save(ctx)
if err != nil {
panic(err)
@@ -543,13 +656,13 @@ func (dpcb *DBPackageCreateBulk) SaveX(ctx context.Context) []*DBPackage {
}
// Exec executes the query.
func (dpcb *DBPackageCreateBulk) Exec(ctx context.Context) error {
func (dpcb *DbPackageCreateBulk) Exec(ctx context.Context) error {
_, err := dpcb.Save(ctx)
return err
}
// ExecX is like Exec, but panics if an error occurs.
func (dpcb *DBPackageCreateBulk) ExecX(ctx context.Context) {
func (dpcb *DbPackageCreateBulk) ExecX(ctx context.Context) {
if err := dpcb.Exec(ctx); err != nil {
panic(err)
}
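A sketch of how the Create and CreateBulk builders above are typically driven (illustrative only; the field values are placeholders, and SaveX/ExecX panic on error as documented).

package example

import (
	"context"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)

// queueOne creates a single DBPackage row with the required fields set
// (pkgbase, march and repository are required by the builder's check step).
func queueOne(ctx context.Context, client *ent.Client) *ent.DBPackage {
	return client.DBPackage.Create().
		SetPkgbase("linux").
		SetMarch("x86-64-v3").
		SetRepository(dbpackage.RepositoryCore).
		SetStatus(dbpackage.StatusQueued).
		SaveX(ctx)
}

// queueMany shows the bulk variant, which is assembled from individual Create builders.
func queueMany(ctx context.Context, client *ent.Client, pkgbases []string) error {
	builders := make([]*ent.DBPackageCreate, 0, len(pkgbases))
	for _, pb := range pkgbases {
		builders = append(builders, client.DBPackage.Create().
			SetPkgbase(pb).
			SetMarch("x86-64-v3").
			SetRepository(dbpackage.RepositoryExtra))
	}
	return client.DBPackage.CreateBulk(builders...).Exec(ctx)
}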

View File

@@ -1,37 +1,65 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"fmt"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"entgo.io/ent/schema/field"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"somegit.dev/ALHP/ALHP.GO/ent/predicate"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
"git.harting.dev/ALHP/ALHP.GO/ent/predicate"
)
// DBPackageDelete is the builder for deleting a DBPackage entity.
type DBPackageDelete struct {
// DbPackageDelete is the builder for deleting a DbPackage entity.
type DbPackageDelete struct {
config
hooks []Hook
mutation *DBPackageMutation
mutation *DbPackageMutation
}
// Where appends a list predicates to the DBPackageDelete builder.
func (dpd *DBPackageDelete) Where(ps ...predicate.DBPackage) *DBPackageDelete {
// Where appends a list predicates to the DbPackageDelete builder.
func (dpd *DbPackageDelete) Where(ps ...predicate.DbPackage) *DbPackageDelete {
dpd.mutation.Where(ps...)
return dpd
}
// Exec executes the deletion query and returns how many vertices were deleted.
func (dpd *DBPackageDelete) Exec(ctx context.Context) (int, error) {
return withHooks(ctx, dpd.sqlExec, dpd.mutation, dpd.hooks)
func (dpd *DbPackageDelete) Exec(ctx context.Context) (int, error) {
var (
err error
affected int
)
if len(dpd.hooks) == 0 {
affected, err = dpd.sqlExec(ctx)
} else {
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutation, ok := m.(*DbPackageMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
dpd.mutation = mutation
affected, err = dpd.sqlExec(ctx)
mutation.done = true
return affected, err
})
for i := len(dpd.hooks) - 1; i >= 0; i-- {
if dpd.hooks[i] == nil {
return 0, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
}
mut = dpd.hooks[i](mut)
}
if _, err := mut.Mutate(ctx, dpd.mutation); err != nil {
return 0, err
}
}
return affected, err
}
// ExecX is like Exec, but panics if an error occurs.
func (dpd *DBPackageDelete) ExecX(ctx context.Context) int {
func (dpd *DbPackageDelete) ExecX(ctx context.Context) int {
n, err := dpd.Exec(ctx)
if err != nil {
panic(err)
@@ -39,8 +67,16 @@ func (dpd *DBPackageDelete) ExecX(ctx context.Context) int {
return n
}
func (dpd *DBPackageDelete) sqlExec(ctx context.Context) (int, error) {
_spec := sqlgraph.NewDeleteSpec(dbpackage.Table, sqlgraph.NewFieldSpec(dbpackage.FieldID, field.TypeInt))
func (dpd *DbPackageDelete) sqlExec(ctx context.Context) (int, error) {
_spec := &sqlgraph.DeleteSpec{
Node: &sqlgraph.NodeSpec{
Table: dbpackage.Table,
ID: &sqlgraph.FieldSpec{
Type: field.TypeInt,
Column: dbpackage.FieldID,
},
},
}
if ps := dpd.mutation.predicates; len(ps) > 0 {
_spec.Predicate = func(selector *sql.Selector) {
for i := range ps {
@@ -48,27 +84,16 @@ func (dpd *DBPackageDelete) sqlExec(ctx context.Context) (int, error) {
}
}
}
affected, err := sqlgraph.DeleteNodes(ctx, dpd.driver, _spec)
if err != nil && sqlgraph.IsConstraintError(err) {
err = &ConstraintError{msg: err.Error(), wrap: err}
}
dpd.mutation.done = true
return affected, err
return sqlgraph.DeleteNodes(ctx, dpd.driver, _spec)
}
// DBPackageDeleteOne is the builder for deleting a single DBPackage entity.
type DBPackageDeleteOne struct {
dpd *DBPackageDelete
}
// Where appends a list predicates to the DBPackageDelete builder.
func (dpdo *DBPackageDeleteOne) Where(ps ...predicate.DBPackage) *DBPackageDeleteOne {
dpdo.dpd.mutation.Where(ps...)
return dpdo
// DbPackageDeleteOne is the builder for deleting a single DbPackage entity.
type DbPackageDeleteOne struct {
dpd *DbPackageDelete
}
// Exec executes the deletion query.
func (dpdo *DBPackageDeleteOne) Exec(ctx context.Context) error {
func (dpdo *DbPackageDeleteOne) Exec(ctx context.Context) error {
n, err := dpdo.dpd.Exec(ctx)
switch {
case err != nil:
@@ -81,8 +106,6 @@ func (dpdo *DBPackageDeleteOne) Exec(ctx context.Context) error {
}
// ExecX is like Exec, but panics if an error occurs.
func (dpdo *DBPackageDeleteOne) ExecX(ctx context.Context) {
if err := dpdo.Exec(ctx); err != nil {
panic(err)
}
func (dpdo *DbPackageDeleteOne) ExecX(ctx context.Context) {
dpdo.dpd.ExecX(ctx)
}
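A sketch of the Delete builder above (illustrative only; the MarchEQ and StatusEQ predicates come from the generated predicate helpers, whose diff is not shown in this excerpt).

package example

import (
	"context"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)

// dropSkipped deletes every package of a given march that is marked as skipped
// and returns how many rows were removed.
func dropSkipped(ctx context.Context, client *ent.Client, march string) (int, error) {
	return client.DBPackage.Delete().
		Where(
			dbpackage.MarchEQ(march),
			dbpackage.StatusEQ(dbpackage.StatusSkipped),
		).
		Exec(ctx)
}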

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,89 +1,56 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
import (
"context"
"errors"
"fmt"
"reflect"
"sync"
"entgo.io/ent"
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqlgraph"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
)
// ent aliases to avoid import conflicts in user's code.
type (
Op = ent.Op
Hook = ent.Hook
Value = ent.Value
Query = ent.Query
QueryContext = ent.QueryContext
Querier = ent.Querier
QuerierFunc = ent.QuerierFunc
Interceptor = ent.Interceptor
InterceptFunc = ent.InterceptFunc
Traverser = ent.Traverser
TraverseFunc = ent.TraverseFunc
Policy = ent.Policy
Mutator = ent.Mutator
Mutation = ent.Mutation
MutateFunc = ent.MutateFunc
Op = ent.Op
Hook = ent.Hook
Value = ent.Value
Query = ent.Query
Policy = ent.Policy
Mutator = ent.Mutator
Mutation = ent.Mutation
MutateFunc = ent.MutateFunc
)
type clientCtxKey struct{}
// FromContext returns a Client stored inside a context, or nil if there isn't one.
func FromContext(ctx context.Context) *Client {
c, _ := ctx.Value(clientCtxKey{}).(*Client)
return c
}
// NewContext returns a new context with the given Client attached.
func NewContext(parent context.Context, c *Client) context.Context {
return context.WithValue(parent, clientCtxKey{}, c)
}
type txCtxKey struct{}
// TxFromContext returns a Tx stored inside a context, or nil if there isn't one.
func TxFromContext(ctx context.Context) *Tx {
tx, _ := ctx.Value(txCtxKey{}).(*Tx)
return tx
}
// NewTxContext returns a new context with the given Tx attached.
func NewTxContext(parent context.Context, tx *Tx) context.Context {
return context.WithValue(parent, txCtxKey{}, tx)
}
// OrderFunc applies an ordering on the sql selector.
// Deprecated: Use Asc/Desc functions or the package builders instead.
type OrderFunc func(*sql.Selector)
var (
initCheck sync.Once
columnCheck sql.ColumnCheck
)
// checkColumn checks if the column exists in the given table.
func checkColumn(table, column string) error {
initCheck.Do(func() {
columnCheck = sql.NewColumnCheck(map[string]func(string) bool{
dbpackage.Table: dbpackage.ValidColumn,
})
})
return columnCheck(table, column)
// columnChecker returns a function indicates if the column exists in the given column.
func columnChecker(table string) func(string) error {
checks := map[string]func(string) bool{
dbpackage.Table: dbpackage.ValidColumn,
}
check, ok := checks[table]
if !ok {
return func(string) error {
return fmt.Errorf("unknown table %q", table)
}
}
return func(column string) error {
if !check(column) {
return fmt.Errorf("unknown column %q for table %q", column, table)
}
return nil
}
}
// Asc applies the given fields in ASC order.
func Asc(fields ...string) func(*sql.Selector) {
func Asc(fields ...string) OrderFunc {
return func(s *sql.Selector) {
check := columnChecker(s.TableName())
for _, f := range fields {
if err := checkColumn(s.TableName(), f); err != nil {
if err := check(f); err != nil {
s.AddError(&ValidationError{Name: f, err: fmt.Errorf("ent: %w", err)})
}
s.OrderBy(sql.Asc(s.C(f)))
@@ -92,10 +59,11 @@ func Asc(fields ...string) func(*sql.Selector) {
}
// Desc applies the given fields in DESC order.
func Desc(fields ...string) func(*sql.Selector) {
func Desc(fields ...string) OrderFunc {
return func(s *sql.Selector) {
check := columnChecker(s.TableName())
for _, f := range fields {
if err := checkColumn(s.TableName(), f); err != nil {
if err := check(f); err != nil {
s.AddError(&ValidationError{Name: f, err: fmt.Errorf("ent: %w", err)})
}
s.OrderBy(sql.Desc(s.C(f)))
@@ -111,6 +79,7 @@ type AggregateFunc func(*sql.Selector) string
// GroupBy(field1, field2).
// Aggregate(ent.As(ent.Sum(field1), "sum_field1"), (ent.As(ent.Sum(field2), "sum_field2")).
// Scan(ctx, &v)
//
func As(fn AggregateFunc, end string) AggregateFunc {
return func(s *sql.Selector) string {
return sql.As(fn(s), end)
@@ -127,7 +96,8 @@ func Count() AggregateFunc {
// Max applies the "max" aggregation function on the given field of each group.
func Max(field string) AggregateFunc {
return func(s *sql.Selector) string {
if err := checkColumn(s.TableName(), field); err != nil {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
@@ -138,7 +108,8 @@ func Max(field string) AggregateFunc {
// Mean applies the "mean" aggregation function on the given field of each group.
func Mean(field string) AggregateFunc {
return func(s *sql.Selector) string {
if err := checkColumn(s.TableName(), field); err != nil {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
@@ -149,7 +120,8 @@ func Mean(field string) AggregateFunc {
// Min applies the "min" aggregation function on the given field of each group.
func Min(field string) AggregateFunc {
return func(s *sql.Selector) string {
if err := checkColumn(s.TableName(), field); err != nil {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
@@ -160,7 +132,8 @@ func Min(field string) AggregateFunc {
// Sum applies the "sum" aggregation function on the given field of each group.
func Sum(field string) AggregateFunc {
return func(s *sql.Selector) string {
if err := checkColumn(s.TableName(), field); err != nil {
check := columnChecker(s.TableName())
if err := check(field); err != nil {
s.AddError(&ValidationError{Name: field, err: fmt.Errorf("ent: %w", err)})
return ""
}
@@ -284,325 +257,3 @@ func IsConstraintError(err error) bool {
var e *ConstraintError
return errors.As(err, &e)
}
// selector embedded by the different Select/GroupBy builders.
type selector struct {
label string
flds *[]string
fns []AggregateFunc
scan func(context.Context, any) error
}
// ScanX is like Scan, but panics if an error occurs.
func (s *selector) ScanX(ctx context.Context, v any) {
if err := s.scan(ctx, v); err != nil {
panic(err)
}
}
// Strings returns list of strings from a selector. It is only allowed when selecting one field.
func (s *selector) Strings(ctx context.Context) ([]string, error) {
if len(*s.flds) > 1 {
return nil, errors.New("ent: Strings is not achievable when selecting more than 1 field")
}
var v []string
if err := s.scan(ctx, &v); err != nil {
return nil, err
}
return v, nil
}
// StringsX is like Strings, but panics if an error occurs.
func (s *selector) StringsX(ctx context.Context) []string {
v, err := s.Strings(ctx)
if err != nil {
panic(err)
}
return v
}
// String returns a single string from a selector. It is only allowed when selecting one field.
func (s *selector) String(ctx context.Context) (_ string, err error) {
var v []string
if v, err = s.Strings(ctx); err != nil {
return
}
switch len(v) {
case 1:
return v[0], nil
case 0:
err = &NotFoundError{s.label}
default:
err = fmt.Errorf("ent: Strings returned %d results when one was expected", len(v))
}
return
}
// StringX is like String, but panics if an error occurs.
func (s *selector) StringX(ctx context.Context) string {
v, err := s.String(ctx)
if err != nil {
panic(err)
}
return v
}
// Ints returns list of ints from a selector. It is only allowed when selecting one field.
func (s *selector) Ints(ctx context.Context) ([]int, error) {
if len(*s.flds) > 1 {
return nil, errors.New("ent: Ints is not achievable when selecting more than 1 field")
}
var v []int
if err := s.scan(ctx, &v); err != nil {
return nil, err
}
return v, nil
}
// IntsX is like Ints, but panics if an error occurs.
func (s *selector) IntsX(ctx context.Context) []int {
v, err := s.Ints(ctx)
if err != nil {
panic(err)
}
return v
}
// Int returns a single int from a selector. It is only allowed when selecting one field.
func (s *selector) Int(ctx context.Context) (_ int, err error) {
var v []int
if v, err = s.Ints(ctx); err != nil {
return
}
switch len(v) {
case 1:
return v[0], nil
case 0:
err = &NotFoundError{s.label}
default:
err = fmt.Errorf("ent: Ints returned %d results when one was expected", len(v))
}
return
}
// IntX is like Int, but panics if an error occurs.
func (s *selector) IntX(ctx context.Context) int {
v, err := s.Int(ctx)
if err != nil {
panic(err)
}
return v
}
// Float64s returns list of float64s from a selector. It is only allowed when selecting one field.
func (s *selector) Float64s(ctx context.Context) ([]float64, error) {
if len(*s.flds) > 1 {
return nil, errors.New("ent: Float64s is not achievable when selecting more than 1 field")
}
var v []float64
if err := s.scan(ctx, &v); err != nil {
return nil, err
}
return v, nil
}
// Float64sX is like Float64s, but panics if an error occurs.
func (s *selector) Float64sX(ctx context.Context) []float64 {
v, err := s.Float64s(ctx)
if err != nil {
panic(err)
}
return v
}
// Float64 returns a single float64 from a selector. It is only allowed when selecting one field.
func (s *selector) Float64(ctx context.Context) (_ float64, err error) {
var v []float64
if v, err = s.Float64s(ctx); err != nil {
return
}
switch len(v) {
case 1:
return v[0], nil
case 0:
err = &NotFoundError{s.label}
default:
err = fmt.Errorf("ent: Float64s returned %d results when one was expected", len(v))
}
return
}
// Float64X is like Float64, but panics if an error occurs.
func (s *selector) Float64X(ctx context.Context) float64 {
v, err := s.Float64(ctx)
if err != nil {
panic(err)
}
return v
}
// Bools returns list of bools from a selector. It is only allowed when selecting one field.
func (s *selector) Bools(ctx context.Context) ([]bool, error) {
if len(*s.flds) > 1 {
return nil, errors.New("ent: Bools is not achievable when selecting more than 1 field")
}
var v []bool
if err := s.scan(ctx, &v); err != nil {
return nil, err
}
return v, nil
}
// BoolsX is like Bools, but panics if an error occurs.
func (s *selector) BoolsX(ctx context.Context) []bool {
v, err := s.Bools(ctx)
if err != nil {
panic(err)
}
return v
}
// Bool returns a single bool from a selector. It is only allowed when selecting one field.
func (s *selector) Bool(ctx context.Context) (_ bool, err error) {
var v []bool
if v, err = s.Bools(ctx); err != nil {
return
}
switch len(v) {
case 1:
return v[0], nil
case 0:
err = &NotFoundError{s.label}
default:
err = fmt.Errorf("ent: Bools returned %d results when one was expected", len(v))
}
return
}
// BoolX is like Bool, but panics if an error occurs.
func (s *selector) BoolX(ctx context.Context) bool {
v, err := s.Bool(ctx)
if err != nil {
panic(err)
}
return v
}
// withHooks invokes the builder operation with the given hooks, if any.
func withHooks[V Value, M any, PM interface {
*M
Mutation
}](ctx context.Context, exec func(context.Context) (V, error), mutation PM, hooks []Hook) (value V, err error) {
if len(hooks) == 0 {
return exec(ctx)
}
var mut Mutator = MutateFunc(func(ctx context.Context, m Mutation) (Value, error) {
mutationT, ok := any(m).(PM)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T", m)
}
// Set the mutation to the builder.
*mutation = *mutationT
return exec(ctx)
})
for i := len(hooks) - 1; i >= 0; i-- {
if hooks[i] == nil {
return value, fmt.Errorf("ent: uninitialized hook (forgotten import ent/runtime?)")
}
mut = hooks[i](mut)
}
v, err := mut.Mutate(ctx, mutation)
if err != nil {
return value, err
}
nv, ok := v.(V)
if !ok {
return value, fmt.Errorf("unexpected node type %T returned from %T", v, mutation)
}
return nv, nil
}
// setContextOp returns a new context with the given QueryContext attached (including its op) in case it does not exist.
func setContextOp(ctx context.Context, qc *QueryContext, op string) context.Context {
if ent.QueryFromContext(ctx) == nil {
qc.Op = op
ctx = ent.NewQueryContext(ctx, qc)
}
return ctx
}
func querierAll[V Value, Q interface {
sqlAll(context.Context, ...queryHook) (V, error)
}]() Querier {
return QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
query, ok := q.(Q)
if !ok {
return nil, fmt.Errorf("unexpected query type %T", q)
}
return query.sqlAll(ctx)
})
}
func querierCount[Q interface {
sqlCount(context.Context) (int, error)
}]() Querier {
return QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
query, ok := q.(Q)
if !ok {
return nil, fmt.Errorf("unexpected query type %T", q)
}
return query.sqlCount(ctx)
})
}
func withInterceptors[V Value](ctx context.Context, q Query, qr Querier, inters []Interceptor) (v V, err error) {
for i := len(inters) - 1; i >= 0; i-- {
qr = inters[i].Intercept(qr)
}
rv, err := qr.Query(ctx, q)
if err != nil {
return v, err
}
vt, ok := rv.(V)
if !ok {
return v, fmt.Errorf("unexpected type %T returned from %T. expected type: %T", vt, q, v)
}
return vt, nil
}
func scanWithInterceptors[Q1 ent.Query, Q2 interface {
sqlScan(context.Context, Q1, any) error
}](ctx context.Context, rootQuery Q1, selectOrGroup Q2, inters []Interceptor, v any) error {
rv := reflect.ValueOf(v)
var qr Querier = QuerierFunc(func(ctx context.Context, q Query) (Value, error) {
query, ok := q.(Q1)
if !ok {
return nil, fmt.Errorf("unexpected query type %T", q)
}
if err := selectOrGroup.sqlScan(ctx, query, v); err != nil {
return nil, err
}
if k := rv.Kind(); k == reflect.Pointer && rv.Elem().CanInterface() {
return rv.Elem().Interface(), nil
}
return v, nil
})
for i := len(inters) - 1; i >= 0; i-- {
qr = inters[i].Intercept(qr)
}
vv, err := qr.Query(ctx, rootQuery)
if err != nil {
return err
}
switch rv2 := reflect.ValueOf(vv); {
case rv.IsNil(), rv2.IsNil(), rv.Kind() != reflect.Pointer:
case rv.Type() == rv2.Type():
rv.Elem().Set(rv2.Elem())
case rv.Elem().Type() == rv2.Type():
rv.Elem().Set(rv2)
}
return nil
}
// queryHook describes an internal hook for the different sqlAll methods.
type queryHook func(context.Context, *sqlgraph.QuerySpec)
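The ordering and aggregation helpers above (Asc/Desc, Count, Sum, and friends) are consumed through the generated GroupBy builder. A short sketch, illustrative only and assuming the newer DBPackage naming:

package example

import (
	"context"
	"log"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)

// countByStatus groups packages by status and counts the rows in each group.
func countByStatus(ctx context.Context, client *ent.Client) {
	var rows []struct {
		Status string `json:"status"`
		Count  int    `json:"count"`
	}
	err := client.DBPackage.Query().
		GroupBy(dbpackage.FieldStatus).
		Aggregate(ent.Count()).
		Scan(ctx, &rows)
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range rows {
		log.Printf("%s: %d", r.Status, r.Count)
	}
}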

View File

@@ -1,16 +1,15 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package enttest
import (
"context"
"somegit.dev/ALHP/ALHP.GO/ent"
"git.harting.dev/ALHP/ALHP.GO/ent"
// required by schema hooks.
_ "somegit.dev/ALHP/ALHP.GO/ent/runtime"
_ "git.harting.dev/ALHP/ALHP.GO/ent/runtime"
"entgo.io/ent/dialect/sql/schema"
"somegit.dev/ALHP/ALHP.GO/ent/migrate"
)
type (
@@ -18,7 +17,7 @@ type (
// testing.T and testing.B and used by enttest.
TestingT interface {
FailNow()
Error(...any)
Error(...interface{})
}
// Option configures client creation.
@@ -60,7 +59,10 @@ func Open(t TestingT, driverName, dataSourceName string, opts ...Option) *ent.Cl
t.Error(err)
t.FailNow()
}
migrateSchema(t, c, o)
if err := c.Schema.Create(context.Background(), o.migrateOpts...); err != nil {
t.Error(err)
t.FailNow()
}
return c
}
@@ -68,17 +70,9 @@ func Open(t TestingT, driverName, dataSourceName string, opts ...Option) *ent.Cl
func NewClient(t TestingT, opts ...Option) *ent.Client {
o := newOptions(opts)
c := ent.NewClient(o.opts...)
migrateSchema(t, c, o)
if err := c.Schema.Create(context.Background(), o.migrateOpts...); err != nil {
t.Error(err)
t.FailNow()
}
return c
}
func migrateSchema(t TestingT, c *ent.Client, o *options) {
tables, err := schema.CopyTables(migrate.Tables)
if err != nil {
t.Error(err)
t.FailNow()
}
if err := migrate.Create(context.Background(), c.Schema, tables, o.migrateOpts...); err != nil {
t.Error(err)
t.FailNow()
}
}
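The enttest helper above is meant for tests. A sketch of a typical setup (illustrative only; the SQLite driver and DSN are assumptions, not part of this changeset):

package example

import (
	"context"
	"testing"

	_ "github.com/mattn/go-sqlite3" // assumed test driver, not part of this changeset

	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
	"somegit.dev/ALHP/ALHP.GO/ent/enttest"
)

// TestDBPackageCreate opens an in-memory client; enttest.Open also runs the
// schema migration and fails the test via TestingT on any error.
func TestDBPackageCreate(t *testing.T) {
	client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
	defer client.Close()

	client.DBPackage.Create().
		SetPkgbase("linux").
		SetMarch("x86-64-v3").
		SetRepository(dbpackage.RepositoryCore).
		SaveX(context.Background())
}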

View File

@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package hook
@@ -6,19 +6,20 @@ import (
"context"
"fmt"
"somegit.dev/ALHP/ALHP.GO/ent"
"git.harting.dev/ALHP/ALHP.GO/ent"
)
// The DBPackageFunc type is an adapter to allow the use of ordinary
// function as DBPackage mutator.
type DBPackageFunc func(context.Context, *ent.DBPackageMutation) (ent.Value, error)
// The DbPackageFunc type is an adapter to allow the use of ordinary
// function as DbPackage mutator.
type DbPackageFunc func(context.Context, *ent.DbPackageMutation) (ent.Value, error)
// Mutate calls f(ctx, m).
func (f DBPackageFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
if mv, ok := m.(*ent.DBPackageMutation); ok {
return f(ctx, mv)
func (f DbPackageFunc) Mutate(ctx context.Context, m ent.Mutation) (ent.Value, error) {
mv, ok := m.(*ent.DbPackageMutation)
if !ok {
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.DbPackageMutation", m)
}
return nil, fmt.Errorf("unexpected mutation type %T. expect *ent.DBPackageMutation", m)
return f(ctx, mv)
}
// Condition is a hook condition function.
@@ -116,6 +117,7 @@ func HasFields(field string, fields ...string) Condition {
// If executes the given hook under condition.
//
// hook.If(ComputeAverage, And(HasFields(...), HasAddedFields(...)))
//
func If(hk ent.Hook, cond Condition) ent.Hook {
return func(next ent.Mutator) ent.Mutator {
return ent.MutateFunc(func(ctx context.Context, m ent.Mutation) (ent.Value, error) {
@@ -130,6 +132,7 @@ func If(hk ent.Hook, cond Condition) ent.Hook {
// On executes the given hook only for the given operation.
//
// hook.On(Log, ent.Delete|ent.Create)
//
func On(hk ent.Hook, op ent.Op) ent.Hook {
return If(hk, HasOp(op))
}
@@ -137,6 +140,7 @@ func On(hk ent.Hook, op ent.Op) ent.Hook {
// Unless skips the given hook only for the given operation.
//
// hook.Unless(Log, ent.Update|ent.UpdateOne)
//
func Unless(hk ent.Hook, op ent.Op) ent.Hook {
return If(hk, Not(HasOp(op)))
}
@@ -157,6 +161,7 @@ func FixedError(err error) ent.Hook {
// Reject(ent.Delete|ent.Update),
// }
// }
//
func Reject(op ent.Op) ent.Hook {
hk := FixedError(fmt.Errorf("%s operation is not allowed", op))
return On(hk, op)
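A short sketch of how these generated helpers are meant to be combined: DBPackageFunc adapts a plain function into an ent.Mutator, and hook.On restricts it to the listed operations. The Use registrar and the op constants from entgo.io/ent are assumptions based on ent's usual generated API, not part of this diff.

package main

import (
	"context"

	entbase "entgo.io/ent"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/hook"
)

// registerAuditHook attaches a mutation hook that only fires on create/update
// operations; the hook body is illustrative only.
func registerAuditHook(client *ent.Client) {
	client.DBPackage.Use(
		hook.On(
			func(next ent.Mutator) ent.Mutator {
				return hook.DBPackageFunc(func(ctx context.Context, m *ent.DBPackageMutation) (ent.Value, error) {
					// inspect or tweak the mutation here before it is applied
					return next.Mutate(ctx, m)
				})
			},
			entbase.OpCreate|entbase.OpUpdate|entbase.OpUpdateOne,
		),
	)
}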


@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package migrate
@@ -28,6 +28,9 @@ var (
// and therefore, it's recommended to enable this option to get more
// flexibility in the schema changes.
WithDropIndex = schema.WithDropIndex
// WithFixture sets the foreign-key renaming option to the migration when upgrading
// ent from v0.1.0 (issue-#285). Defaults to false.
WithFixture = schema.WithFixture
// WithForeignKeys enables creating foreign-key in schema DDL. This defaults to true.
WithForeignKeys = schema.WithForeignKeys
)
@@ -42,23 +45,27 @@ func NewSchema(drv dialect.Driver) *Schema { return &Schema{drv: drv} }
// Create creates all schema resources.
func (s *Schema) Create(ctx context.Context, opts ...schema.MigrateOption) error {
return Create(ctx, s, Tables, opts...)
}
// Create creates all table resources using the given schema driver.
func Create(ctx context.Context, s *Schema, tables []*schema.Table, opts ...schema.MigrateOption) error {
migrate, err := schema.NewMigrate(s.drv, opts...)
if err != nil {
return fmt.Errorf("ent/migrate: %w", err)
}
return migrate.Create(ctx, tables...)
return migrate.Create(ctx, Tables...)
}
// WriteTo writes the schema changes to w instead of running them against the database.
//
// if err := client.Schema.WriteTo(context.Background(), os.Stdout); err != nil {
// if err := client.Schema.WriteTo(context.Background(), os.Stdout); err != nil {
// log.Fatal(err)
// }
// }
//
func (s *Schema) WriteTo(ctx context.Context, w io.Writer, opts ...schema.MigrateOption) error {
return Create(ctx, &Schema{drv: &schema.WriteDriver{Writer: w, Driver: s.drv}}, Tables, opts...)
drv := &schema.WriteDriver{
Writer: w,
Driver: s.drv,
}
migrate, err := schema.NewMigrate(drv, opts...)
if err != nil {
return fmt.Errorf("ent/migrate: %w", err)
}
return migrate.Create(ctx, Tables...)
}
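A hedged example of how the two entry points above might be driven with migrate options; the option choices are illustrative and not taken from this repository's configuration.

package main

import (
	"context"
	"os"

	"entgo.io/ent/dialect/sql/schema"

	"somegit.dev/ALHP/ALHP.GO/ent"
)

// migrateOrDump either applies the schema directly or writes the DDL to stdout
// for inspection, using the Create/WriteTo methods shown above.
func migrateOrDump(ctx context.Context, client *ent.Client, dryRun bool) error {
	opts := []schema.MigrateOption{
		schema.WithDropIndex(true),
		schema.WithForeignKeys(false),
	}
	if dryRun {
		return client.Schema.WriteTo(ctx, os.Stdout, opts...)
	}
	return client.Schema.Create(ctx, opts...)
}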


@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package migrate
@@ -13,14 +13,15 @@ var (
{Name: "id", Type: field.TypeInt, Increment: true},
{Name: "pkgbase", Type: field.TypeString},
{Name: "packages", Type: field.TypeJSON, Nullable: true},
{Name: "status", Type: field.TypeEnum, Nullable: true, Enums: []string{"skipped", "failed", "built", "queued", "delayed", "building", "latest", "signing", "unknown"}, Default: "unknown"},
{Name: "status", Type: field.TypeEnum, Nullable: true, Enums: []string{"skipped", "failed", "build", "queued", "building", "latest", "signing", "unknown"}, Default: "unknown"},
{Name: "skip_reason", Type: field.TypeString, Nullable: true},
{Name: "repository", Type: field.TypeEnum, Enums: []string{"extra", "core", "multilib"}},
{Name: "repository", Type: field.TypeEnum, Enums: []string{"extra", "core", "community"}},
{Name: "march", Type: field.TypeString},
{Name: "version", Type: field.TypeString, Nullable: true},
{Name: "repo_version", Type: field.TypeString, Nullable: true},
{Name: "build_time_start", Type: field.TypeTime, Nullable: true},
{Name: "updated", Type: field.TypeTime, Nullable: true},
{Name: "hash", Type: field.TypeString, Nullable: true},
{Name: "lto", Type: field.TypeEnum, Nullable: true, Enums: []string{"enabled", "unknown", "disabled", "auto_disabled"}, Default: "unknown"},
{Name: "last_version_build", Type: field.TypeString, Nullable: true},
{Name: "last_verified", Type: field.TypeTime, Nullable: true},
@@ -30,7 +31,6 @@ var (
{Name: "s_time", Type: field.TypeInt64, Nullable: true},
{Name: "io_in", Type: field.TypeInt64, Nullable: true},
{Name: "io_out", Type: field.TypeInt64, Nullable: true},
{Name: "tag_rev", Type: field.TypeString, Nullable: true},
}
// DbPackagesTable holds the schema information for the "db_packages" table.
DbPackagesTable = &schema.Table{

File diff suppressed because it is too large


@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package predicate
@@ -6,5 +6,5 @@ import (
"entgo.io/ent/dialect/sql"
)
// DBPackage is the predicate function for dbpackage builders.
type DBPackage func(*sql.Selector)
// DbPackage is the predicate function for dbpackage builders.
type DbPackage func(*sql.Selector)


@@ -1,17 +1,17 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
import (
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"somegit.dev/ALHP/ALHP.GO/ent/schema"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
"git.harting.dev/ALHP/ALHP.GO/ent/schema"
)
// The init function reads all schema descriptors with runtime code
// (default values, validators, hooks and policies) and stitches it
// to their package variables.
func init() {
dbpackageFields := schema.DBPackage{}.Fields()
dbpackageFields := schema.DbPackage{}.Fields()
_ = dbpackageFields
// dbpackageDescPkgbase is the schema descriptor for pkgbase field.
dbpackageDescPkgbase := dbpackageFields[0].Descriptor()


@@ -1,10 +1,10 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package runtime
// The schema-stitching logic is generated in somegit.dev/ALHP/ALHP.GO/ent/runtime.go
// The schema-stitching logic is generated in git.harting.dev/ALHP/ALHP.GO/ent/runtime.go
const (
Version = "v0.14.2" // Version of ent codegen.
Sum = "h1:ywld/j2Rx4EmnIKs8eZ29cbFA1zpB+DA9TLL5l3rlq0=" // Sum of ent codegen.
Version = "v0.10.1" // Version of ent codegen.
Sum = "h1:dM5h4Zk6yHGIgw4dCqVzGw3nWgpGYJiV4/kyHEF6PFo=" // Sum of ent codegen.
)


@@ -5,25 +5,25 @@ import (
"entgo.io/ent/schema/field"
)
// DBPackage holds the schema definition for the DbPackage entity.
type DBPackage struct {
// DbPackage holds the schema definition for the DbPackage entity.
type DbPackage struct {
ent.Schema
}
// Fields of the DBPackage.
func (DBPackage) Fields() []ent.Field {
// Fields of the DbPackage.
func (DbPackage) Fields() []ent.Field {
return []ent.Field{
field.String("pkgbase").NotEmpty().Immutable(),
field.Strings("packages").Optional(),
field.Enum("status").Values("skipped", "failed", "built", "queued", "delayed", "building",
"latest", "signing", "unknown").Default("unknown").Optional(),
field.Enum("status").Values("skipped", "failed", "build", "queued", "building", "latest", "signing", "unknown").Default("unknown").Optional(),
field.String("skip_reason").Optional(),
field.Enum("repository").Values("extra", "core", "multilib"),
field.Enum("repository").Values("extra", "core", "community"),
field.String("march").NotEmpty().Immutable(),
field.String("version").Optional(),
field.String("repo_version").Optional(),
field.Time("build_time_start").Optional(),
field.Time("updated").Optional(),
field.String("hash").Optional(),
field.Enum("lto").Values("enabled", "unknown", "disabled", "auto_disabled").Default("unknown").Optional(),
field.String("last_version_build").Optional(),
field.Time("last_verified").Optional(),
@@ -33,11 +33,10 @@ func (DBPackage) Fields() []ent.Field {
field.Int64("s_time").Optional().Nillable(),
field.Int64("io_in").Optional().Nillable(),
field.Int64("io_out").Optional().Nillable(),
field.String("tag_rev").Optional().Nillable(),
}
}
// Edges of the DBPackage.
func (DBPackage) Edges() []ent.Edge {
// Edges of the DbPackage.
func (DbPackage) Edges() []ent.Edge {
return nil
}
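The enum values declared here are what the generated dbpackage predicates operate on. A small query sketch, assuming the usual generated helpers (StatusEQ, March, Query/Where/All), which also appear in the housekeeping code in this repository:

package main

import (
	"context"

	"somegit.dev/ALHP/ALHP.GO/ent"
	"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
)

// queuedForMarch returns all packages of a given march that are currently queued.
func queuedForMarch(ctx context.Context, db *ent.Client, march string) ([]*ent.DBPackage, error) {
	return db.DBPackage.Query().
		Where(
			dbpackage.StatusEQ(dbpackage.StatusQueued),
			dbpackage.March(march),
		).
		All(ctx)
}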


@@ -1,4 +1,4 @@
// Code generated by ent, DO NOT EDIT.
// Code generated by entc, DO NOT EDIT.
package ent
@@ -12,12 +12,18 @@ import (
// Tx is a transactional client that is created by calling Client.Tx().
type Tx struct {
config
// DBPackage is the client for interacting with the DBPackage builders.
DBPackage *DBPackageClient
// DbPackage is the client for interacting with the DbPackage builders.
DbPackage *DbPackageClient
// lazily loaded.
client *Client
clientOnce sync.Once
// completion callbacks.
mu sync.Mutex
onCommit []CommitHook
onRollback []RollbackHook
// ctx lives for the life of the transaction. It is
// the same context used by the underlying connection.
ctx context.Context
@@ -62,9 +68,9 @@ func (tx *Tx) Commit() error {
var fn Committer = CommitFunc(func(context.Context, *Tx) error {
return txDriver.tx.Commit()
})
txDriver.mu.Lock()
hooks := append([]CommitHook(nil), txDriver.onCommit...)
txDriver.mu.Unlock()
tx.mu.Lock()
hooks := append([]CommitHook(nil), tx.onCommit...)
tx.mu.Unlock()
for i := len(hooks) - 1; i >= 0; i-- {
fn = hooks[i](fn)
}
@@ -73,10 +79,9 @@ func (tx *Tx) Commit() error {
// OnCommit adds a hook to call on commit.
func (tx *Tx) OnCommit(f CommitHook) {
txDriver := tx.config.driver.(*txDriver)
txDriver.mu.Lock()
txDriver.onCommit = append(txDriver.onCommit, f)
txDriver.mu.Unlock()
tx.mu.Lock()
defer tx.mu.Unlock()
tx.onCommit = append(tx.onCommit, f)
}
type (
@@ -118,9 +123,9 @@ func (tx *Tx) Rollback() error {
var fn Rollbacker = RollbackFunc(func(context.Context, *Tx) error {
return txDriver.tx.Rollback()
})
txDriver.mu.Lock()
hooks := append([]RollbackHook(nil), txDriver.onRollback...)
txDriver.mu.Unlock()
tx.mu.Lock()
hooks := append([]RollbackHook(nil), tx.onRollback...)
tx.mu.Unlock()
for i := len(hooks) - 1; i >= 0; i-- {
fn = hooks[i](fn)
}
@@ -129,10 +134,9 @@ func (tx *Tx) Rollback() error {
// OnRollback adds a hook to call on rollback.
func (tx *Tx) OnRollback(f RollbackHook) {
txDriver := tx.config.driver.(*txDriver)
txDriver.mu.Lock()
txDriver.onRollback = append(txDriver.onRollback, f)
txDriver.mu.Unlock()
tx.mu.Lock()
defer tx.mu.Unlock()
tx.onRollback = append(tx.onRollback, f)
}
// Client returns a Client that binds to current transaction.
@@ -145,7 +149,7 @@ func (tx *Tx) Client() *Client {
}
func (tx *Tx) init() {
tx.DBPackage = NewDBPackageClient(tx.config)
tx.DbPackage = NewDbPackageClient(tx.config)
}
// txDriver wraps the given dialect.Tx with a nop dialect.Driver implementation.
@@ -155,7 +159,7 @@ func (tx *Tx) init() {
// of them in order to commit or rollback the transaction.
//
// If a closed transaction is embedded in one of the generated entities, and the entity
// applies a query, for example: DBPackage.QueryXXX(), the query will be executed
// applies a query, for example: DbPackage.QueryXXX(), the query will be executed
// through the driver which created this transaction.
//
// Note that txDriver is not goroutine safe.
@@ -164,10 +168,6 @@ type txDriver struct {
drv dialect.Driver
// tx is the underlying transaction.
tx dialect.Tx
// completion hooks.
mu sync.Mutex
onCommit []CommitHook
onRollback []RollbackHook
}
// newTx creates a new transactional driver.
@@ -198,12 +198,12 @@ func (*txDriver) Commit() error { return nil }
func (*txDriver) Rollback() error { return nil }
// Exec calls tx.Exec.
func (tx *txDriver) Exec(ctx context.Context, query string, args, v any) error {
func (tx *txDriver) Exec(ctx context.Context, query string, args, v interface{}) error {
return tx.tx.Exec(ctx, query, args, v)
}
// Query calls tx.Query.
func (tx *txDriver) Query(ctx context.Context, query string, args, v any) error {
func (tx *txDriver) Query(ctx context.Context, query string, args, v interface{}) error {
return tx.tx.Query(ctx, query, args, v)
}
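Putting the pieces above together, a caller typically opens a transaction, optionally attaches commit/rollback hooks, and then commits or rolls back. A sketch, assuming the generated client exposes the usual Tx constructor:

package main

import (
	"context"
	"log"

	"somegit.dev/ALHP/ALHP.GO/ent"
)

// withTx runs fn inside a transaction; the OnCommit hook wraps the final
// Commit call, mirroring the hook chain built in the generated code above.
func withTx(ctx context.Context, client *ent.Client, fn func(tx *ent.Tx) error) error {
	tx, err := client.Tx(ctx)
	if err != nil {
		return err
	}
	tx.OnCommit(func(next ent.Committer) ent.Committer {
		return ent.CommitFunc(func(ctx context.Context, tx *ent.Tx) error {
			log.Println("committing transaction")
			return next.Commit(ctx, tx)
		})
	})
	if err := fn(tx); err != nil {
		_ = tx.Rollback()
		return err
	}
	return tx.Commit()
}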


@@ -1,44 +0,0 @@
# template values get replaced on makepkg.conf generation
# $level$ -> march x86-64 level, e.g. v3
# $march$ -> full march, e.g. x86-64-v3
# $buildproc$ -> number of threads to build with
common:
  cflags:
    - "-mtune=generic": ~
    - "-O2": "-O3"
    - "-mpclmul" # https://somegit.dev/ALHP/ALHP.GO/issues/92
    - "-march=x86-64": "-march=$march$"
  options:
    - "lto": "!lto" # disable lto; see 'lto' section below
  buildenv:
    - "color": "!color" # color messes up the log output
  goamd64: "$level$" # https://somegit.dev/ALHP/ALHP.GO/issues/116
  packager: "ALHP $march$ <alhp@harting.dev>"
  makeflags: "-j$buildproc$"
  # https://somegit.dev/ALHP/ALHP.GO/issues/110
  rustflags:
    - "-Copt-level=3"
    - "-Ctarget-cpu=$march$"
    - "-Clink-arg=-z"
    - "-Clink-arg=pack-relative-relocs"
  ltoflags:
    - "-falign-functions=32" # https://github.com/InBetweenNames/gentooLTO/issues/164
  kcflags: " -march=$march$ -O3"
  kcppflags: " -march=$march$ -O3"
  fcflags: "$FFLAGS"
  fflags:
    - "-O2": "-O3"
    - "-march=$march$"
lto:
  rustflags:
    - "-Ccodegen-units=1"
  options:
    - "!lto": "lto"
  cargo_profile_release_lto: "fat"
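The placeholder substitution described in the header comment could look roughly like the following. This helper is hypothetical: it only illustrates the $level$/$march$/$buildproc$ replacement, not the actual makepkg.conf generator in this repository.

package main

import (
	"fmt"
	"strings"
)

// expandTemplate replaces the template placeholders used in the flags config
// with concrete values for one march (illustrative helper, not from the repo).
func expandTemplate(s, level, march string, buildproc int) string {
	return strings.NewReplacer(
		"$level$", level,
		"$march$", march,
		"$buildproc$", fmt.Sprint(buildproc),
	).Replace(s)
}

// expandTemplate("-march=$march$ -j$buildproc$", "v3", "x86-64-v3", 8)
// would yield "-march=x86-64-v3 -j8".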

go.mod

@@ -1,58 +1,41 @@
module somegit.dev/ALHP/ALHP.GO
module git.harting.dev/ALHP/ALHP.GO
go 1.23.0
toolchain go1.23.1
go 1.18
require (
entgo.io/ent v0.14.3
github.com/Jguer/go-alpm/v2 v2.2.2
entgo.io/ent v0.10.2-0.20220502113020-4ac82f5bb3f0
github.com/Jguer/go-alpm/v2 v2.1.2
github.com/Morganamilo/go-pacmanconf v0.0.0-20210502114700-cff030e927a5
github.com/Morganamilo/go-srcinfo v1.0.0
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500
github.com/gobwas/glob v0.2.3
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v4 v4.18.3
github.com/otiai10/copy v1.14.1
github.com/prometheus/client_golang v1.21.1
github.com/sethvargo/go-retry v0.3.0
github.com/sirupsen/logrus v1.9.3
github.com/wercker/journalhook v0.0.0-20230927020745-64542ffa4117
github.com/google/uuid v1.3.0
github.com/jackc/pgx/v4 v4.16.1
github.com/sirupsen/logrus v1.8.1
github.com/wercker/journalhook v0.0.0-20180428041537-5d0a5ae867b3
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
gopkg.in/yaml.v2 v2.4.0
lukechampine.com/blake3 v1.1.7
)
require (
ariga.io/atlas v0.32.0 // indirect
ariga.io/atlas v0.3.8 // indirect
github.com/agext/levenshtein v1.2.3 // indirect
github.com/apparentlymart/go-textseg/v15 v15.0.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bmatcuk/doublestar v1.3.4 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
github.com/go-openapi/inflect v0.21.1 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/hashicorp/hcl/v2 v2.23.0 // indirect
github.com/go-openapi/inflect v0.19.0 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/hashicorp/hcl/v2 v2.12.0 // indirect
github.com/jackc/chunkreader/v2 v2.0.1 // indirect
github.com/jackc/pgconn v1.14.3 // indirect
github.com/jackc/pgconn v1.12.1 // indirect
github.com/jackc/pgio v1.0.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgproto3/v2 v2.3.3 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgtype v1.14.4 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/jackc/pgproto3/v2 v2.3.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b // indirect
github.com/jackc/pgtype v1.11.0 // indirect
github.com/klauspost/cpuid/v2 v2.0.12 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/otiai10/mint v1.6.3 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.63.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/zclconf/go-cty v1.16.2 // indirect
github.com/zclconf/go-cty-yaml v1.1.0 // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/mod v0.24.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/tools v0.31.0 // indirect
google.golang.org/protobuf v1.36.5 // indirect
github.com/zclconf/go-cty v1.10.0 // indirect
golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122 // indirect
golang.org/x/mod v0.5.1 // indirect
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6 // indirect
golang.org/x/text v0.3.7 // indirect
)

go.sum

@@ -1,34 +1,24 @@
ariga.io/atlas v0.31.1-0.20250212144724-069be8033e83 h1:nX4HXncwIdvQ8/8sIUIf1nyCkK8qdBaHQ7EtzPpuiGE=
ariga.io/atlas v0.31.1-0.20250212144724-069be8033e83/go.mod h1:Oe1xWPuu5q9LzyrWfbZmEZxFYeu4BHTyzfjeW2aZp/w=
ariga.io/atlas v0.32.0 h1:y+77nueMrExLiKlz1CcPKh/nU7VSlWfBbwCShsJyvCw=
ariga.io/atlas v0.32.0/go.mod h1:Oe1xWPuu5q9LzyrWfbZmEZxFYeu4BHTyzfjeW2aZp/w=
entgo.io/ent v0.14.2 h1:ywld/j2Rx4EmnIKs8eZ29cbFA1zpB+DA9TLL5l3rlq0=
entgo.io/ent v0.14.2/go.mod h1:aDPE/OziPEu8+OWbzy4UlvWmD2/kbRuWfK2A40hcxJM=
entgo.io/ent v0.14.3 h1:wokAV/kIlH9TeklJWGGS7AYJdVckr0DloWjIcO9iIIQ=
entgo.io/ent v0.14.3/go.mod h1:aDPE/OziPEu8+OWbzy4UlvWmD2/kbRuWfK2A40hcxJM=
ariga.io/atlas v0.3.8 h1:O1JH6fmCc1P8nUtEMVW6Xju/ASiTeqG2JCgxzaj0SaE=
ariga.io/atlas v0.3.8/go.mod h1:D/d0a5QyMFU2R5E8ArmpnWbMjFP9LOr0TsQNbKqhT20=
entgo.io/ent v0.10.2-0.20220502113020-4ac82f5bb3f0 h1:qHA4+ANAzDj6BcDLxNgZuzKxFre/RI9r5wwsI2O+1M4=
entgo.io/ent v0.10.2-0.20220502113020-4ac82f5bb3f0/go.mod h1:Zh61BPvB+cL6VWEyN8f1YoDacrMjQf2KDlDeX26xq2k=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60=
github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/Jguer/go-alpm/v2 v2.2.2 h1:sPwUoZp1X5Tw6K6Ba1lWvVJfcgVNEGVcxARLBttZnC0=
github.com/Jguer/go-alpm/v2 v2.2.2/go.mod h1:lfe8gSe83F/KERaQvEfrSqQ4n+8bES+ZIyKWR/gm3MI=
github.com/Jguer/go-alpm/v2 v2.1.2 h1:CGTIxzuEpT9Q3a7IBrx0E6acoYoaHX2Z93UOApPDhgU=
github.com/Jguer/go-alpm/v2 v2.1.2/go.mod h1:uLQcTMNM904dRiGU+/JDtDdd7Nd8mVbEVaHjhmziT7w=
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
github.com/Morganamilo/go-pacmanconf v0.0.0-20210502114700-cff030e927a5 h1:TMscPjkb1ThXN32LuFY5bEYIcXZx3YlwzhS1GxNpn/c=
github.com/Morganamilo/go-pacmanconf v0.0.0-20210502114700-cff030e927a5/go.mod h1:Hk55m330jNiwxRodIlMCvw5iEyoRUCIY64W1p9D+tHc=
github.com/Morganamilo/go-srcinfo v1.0.0 h1:Wh4nEF+HJWo+29hnxM18Q2hi+DUf0GejS13+Wg+dzmI=
github.com/Morganamilo/go-srcinfo v1.0.0/go.mod h1:MP6VGY1NNpVUmYIEgoM9acix95KQqIRyqQ0hCLsyYUY=
github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
github.com/apparentlymart/go-textseg/v15 v15.0.0 h1:uYvfpb3DyLSCGWnctWKGj857c6ew1u1fNQOlOtuGxQY=
github.com/apparentlymart/go-textseg/v15 v15.0.0/go.mod h1:K8XmNZdhEBkdlyDdvbmmsvpAG721bKi0joRfFdHIWJ4=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bmatcuk/doublestar v1.3.4 h1:gPypJ5xD31uhX6Tf54sDPUOBXTqKH4c9aPY66CyQrS0=
github.com/bmatcuk/doublestar v1.3.4/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE=
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500 h1:6lhrsTEnloDPXyeZBvSYvQf8u86jbKehZPVDDlkgDl4=
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500/go.mod h1:S/7n9copUssQ56c7aAgHqftWO4LTf4xY6CGWt8Bc+3M=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM=
github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk=
github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw=
github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkEghdlcqw7yxLeM89kiTRPUo=
github.com/cockroachdb/apd v1.1.0 h1:3LFP3629v+1aKXU5Q37mxmRxX/pIu1nijXydLShEq5I=
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
@@ -41,25 +31,24 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-openapi/inflect v0.21.0 h1:FoBjBTQEcbg2cJUWX6uwL9OyIW8eqc9k4KhN4lfbeYk=
github.com/go-openapi/inflect v0.21.0/go.mod h1:INezMuUu7SJQc2AyR3WO0DqqYUJSj8Kb4hBd7WtjlAw=
github.com/go-openapi/inflect v0.21.1 h1:swwdJV4YPbuQaz68rHiBeQj+MWeBjDDNyEAi78Fhu4g=
github.com/go-openapi/inflect v0.21.1/go.mod h1:INezMuUu7SJQc2AyR3WO0DqqYUJSj8Kb4hBd7WtjlAw=
github.com/go-openapi/inflect v0.19.0 h1:9jCH9scKIbHeV9m12SmPilScz6krDxKRasNNSNPXu/4=
github.com/go-openapi/inflect v0.19.0/go.mod h1:lHpZVlpIQqLyKwJ4N+YSc9hchQy/i12fJykb83CRBH4=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/gofrs/uuid v4.0.0+incompatible h1:1SD/1F5pU8p29ybwgQSwpQk+mwdRrXCYuPhW6m+TnJw=
github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/hcl/v2 v2.23.0 h1:Fphj1/gCylPxHutVSEOf2fBOh1VE4AuLV7+kbJf3qos=
github.com/hashicorp/hcl/v2 v2.23.0/go.mod h1:62ZYHrXgPoX8xBnzl8QzbWq4dyDsDtfCRgIq1rbJEvA=
github.com/jackc/chunkreader v1.0.0 h1:4s39bBR8ByfqH+DKm8rQA3E1LHZWB9XWcrz8fqaZbe0=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/hcl/v2 v2.12.0 h1:PsYxySWpMD4KPaoJLnsHwtK5Qptvj/4Q6s0t4sUxZf4=
github.com/hashicorp/hcl/v2 v2.12.0/go.mod h1:FwWsfWEjyV/CMj8s/gqAuiviY72rJ1/oayI9WftqcKg=
github.com/jackc/chunkreader v1.0.0/go.mod h1:RT6O25fNZIuasFJRyZ4R/Y2BbhasbmZXF9QQ7T3kePo=
github.com/jackc/chunkreader/v2 v2.0.0/go.mod h1:odVSm741yZoC3dpHEUXIqA9tQRhFrgOHwnPIn9lDKlk=
github.com/jackc/chunkreader/v2 v2.0.1 h1:i+RDz65UE+mmpjTfyz0MoVTnzeYxroil2G82ki7MGG8=
@@ -70,8 +59,8 @@ github.com/jackc/pgconn v0.0.0-20190831204454-2fabfa3c18b7/go.mod h1:ZJKsE/KZfsU
github.com/jackc/pgconn v1.8.0/go.mod h1:1C2Pb36bGIP9QHGBYCjnyhqu7Rv3sGshaQUvmfGIB/o=
github.com/jackc/pgconn v1.9.0/go.mod h1:YctiPyvzfU11JFxoXokUOOKQXQmDMoJL9vJzHH8/2JY=
github.com/jackc/pgconn v1.9.1-0.20210724152538-d89c8390a530/go.mod h1:4z2w8XhRbP1hYxkpTuBjTS3ne3J48K83+u0zoyvg2pI=
github.com/jackc/pgconn v1.14.3 h1:bVoTr12EGANZz66nZPkMInAV/KHD2TxH9npjXXgiB3w=
github.com/jackc/pgconn v1.14.3/go.mod h1:RZbme4uasqzybK2RK5c65VsHxoyaml09lx3tXOcO/VM=
github.com/jackc/pgconn v1.12.1 h1:rsDFzIpRk7xT4B8FufgpCCeyjdNpKyghZeSefViE5W8=
github.com/jackc/pgconn v1.12.1/go.mod h1:ZkhRC59Llhrq3oSfrikvwQ5NaxYExr6twkdkMLaKono=
github.com/jackc/pgio v1.0.0 h1:g12B9UwVnzGhueNavwioyEEpAmqMe1E/BN9ES+8ovkE=
github.com/jackc/pgio v1.0.0/go.mod h1:oP+2QK2wFfUWgr+gxjoBH9KGBb31Eio69xUb0w5bYf8=
github.com/jackc/pgmock v0.0.0-20190831213851-13a1b77aafa2/go.mod h1:fGZlG77KXmcq05nJLRkk0+p82V8B8Dw8KN2/V9c/OAE=
@@ -80,7 +69,6 @@ github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65 h1:DadwsjnMwFjfWc9y5W
github.com/jackc/pgmock v0.0.0-20210724152146-4ad1a8207f65/go.mod h1:5R2h2EEX+qri8jOWMbJCtaPWkrrNc7OHwsp2TCqp7ak=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgproto3 v1.1.0 h1:FYYE4yRw+AgI8wXIinMlNjBbp/UitDJwfj5LqqewP1A=
github.com/jackc/pgproto3 v1.1.0/go.mod h1:eR5FA3leWg7p9aeAqi37XOTgTIbkABlvcPB3E5rlc78=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190420180111-c116219b62db/go.mod h1:bhq50y+xrl9n5mRYyCBFKkpRVTLYJVWeCc+mEAI3yXA=
github.com/jackc/pgproto3/v2 v2.0.0-alpha1.0.20190609003834-432c2951c711/go.mod h1:uH0AWtUmuShn0bcesswc4aBTWGvw0cAxIJp+6OB//Wg=
@@ -88,125 +76,92 @@ github.com/jackc/pgproto3/v2 v2.0.0-rc3/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvW
github.com/jackc/pgproto3/v2 v2.0.0-rc3.0.20190831210041-4c03ce451f29/go.mod h1:ryONWYqW6dqSg1Lw6vXNMXoBJhpzvWKnT95C46ckYeM=
github.com/jackc/pgproto3/v2 v2.0.6/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.1.1/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.3.3 h1:1HLSx5H+tXR9pW3in3zaztoEwQYRC9SQaYUHjTSUOag=
github.com/jackc/pgproto3/v2 v2.3.3/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgproto3/v2 v2.3.0 h1:brH0pCGBDkBW07HWlN/oSBXrmo3WB0UvZd1pIuDcL8Y=
github.com/jackc/pgproto3/v2 v2.3.0/go.mod h1:WfJCnwN3HIg9Ish/j3sgWXnAfK8A9Y0bwXYU5xKaEdA=
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b h1:C8S2+VttkHFdOOCXJe+YGfa4vHYwlt4Zx+IVXQ97jYg=
github.com/jackc/pgservicefile v0.0.0-20200714003250-2b9c44734f2b/go.mod h1:vsD4gTJCa9TptPL8sPkXrLZ+hDuNrZCnj29CQpr4X1E=
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgtype v0.0.0-20190421001408-4ed0de4755e0/go.mod h1:hdSHsc1V01CGwFsrv11mJRHWJ6aifDLfdV3aVjFF0zg=
github.com/jackc/pgtype v0.0.0-20190824184912-ab885b375b90/go.mod h1:KcahbBH1nCMSo2DXpzsoWOAfFkdEtEJpPbVLq8eE+mc=
github.com/jackc/pgtype v0.0.0-20190828014616-a8802b16cc59/go.mod h1:MWlu30kVJrUS8lot6TQqcg7mtthZ9T0EoIBFiJcmcyw=
github.com/jackc/pgtype v1.8.1-0.20210724151600-32e20a603178/go.mod h1:C516IlIV9NKqfsMCXTdChteoXmwgUceqaLfjg2e3NlM=
github.com/jackc/pgtype v1.14.0/go.mod h1:LUMuVrfsFfdKGLw+AFFVv6KtHOFMwRgDDzBt76IqCA4=
github.com/jackc/pgtype v1.14.4 h1:fKuNiCumbKTAIxQwXfB/nsrnkEI6bPJrrSiMKgbJ2j8=
github.com/jackc/pgtype v1.14.4/go.mod h1:aKeozOde08iifGosdJpz9MBZonJOUJxqNpPBcMJTlVA=
github.com/jackc/pgtype v1.11.0 h1:u4uiGPz/1hryuXzyaBhSk6dnIyyG2683olG2OV+UUgs=
github.com/jackc/pgtype v1.11.0/go.mod h1:LUMuVrfsFfdKGLw+AFFVv6KtHOFMwRgDDzBt76IqCA4=
github.com/jackc/pgx/v4 v4.0.0-20190420224344-cc3461e65d96/go.mod h1:mdxmSJJuR08CZQyj1PVQBHy9XOp5p8/SHH6a0psbY9Y=
github.com/jackc/pgx/v4 v4.0.0-20190421002000-1b8f0016e912/go.mod h1:no/Y67Jkk/9WuGR0JG/JseM9irFbnEPbuWV2EELPNuM=
github.com/jackc/pgx/v4 v4.0.0-pre1.0.20190824185557-6972a5742186/go.mod h1:X+GQnOEnf1dqHGpw7JmHqHc1NxDoalibchSk9/RWuDc=
github.com/jackc/pgx/v4 v4.12.1-0.20210724153913-640aa07df17c/go.mod h1:1QD0+tgSXP7iUjYm9C1NxKhny7lq6ee99u/z+IHFcgs=
github.com/jackc/pgx/v4 v4.18.2/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
github.com/jackc/pgx/v4 v4.18.3 h1:dE2/TrEsGX3RBprb3qryqSV9Y60iZN1C6i8IrmW9/BA=
github.com/jackc/pgx/v4 v4.18.3/go.mod h1:Ey4Oru5tH5sB6tV7hDmfWFahwF15Eb7DNXlRKx2CkVw=
github.com/jackc/pgx/v4 v4.16.1 h1:JzTglcal01DrghUqt+PmzWsZx/Yh7SC/CTQmSBMTd0Y=
github.com/jackc/pgx/v4 v4.16.1/go.mod h1:SIhx0D5hoADaiXZVyv+3gSm3LCIIINTVO0PficsvWGQ=
github.com/jackc/puddle v0.0.0-20190413234325-e4ced69a3a2b/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v0.0.0-20190608224051-11cab39313c9/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.1.3/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/jackc/puddle v1.3.0/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/jackc/puddle v1.2.1/go.mod h1:m4B5Dj62Y0fbyuIc15OsIqK0+JU8nkqQjsgx7dvjSWk=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.11 h1:In6xLpyWOi1+C7tXUUWv2ot1QvBjxevKAaI6IXrJmUc=
github.com/klauspost/compress v1.17.11/go.mod h1:pMDklpSncoRMuLFrf1W9Ss9KT+0rH90U12bZKk7uwG0=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.12 h1:p9dKCg8i4gmOxtv35DvrYoWqYzQrvEVdjQ762Y0OqZE=
github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348 h1:MtvEpTB6LX3vkb4ax0b5D2DHbNAUsen0Gx5wZoq3lV4=
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.10.2 h1:AqzbZs4ZoCBp+GtejcpCpcxM3zlSMx29dXbUSeVtJb8=
github.com/lib/pq v1.10.2/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.10.5 h1:J+gdV2cUmX7ZqL2B0lFcW0m+egaHC2V3lpO8nWxyYiQ=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-sqlite3 v1.14.16 h1:yOQRA0RpS5PFz/oikGwBEqvAWhWg5ufRz4ETLjwpU1Y=
github.com/mattn/go-sqlite3 v1.14.16/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/mattn/go-sqlite3 v1.14.10 h1:MLn+5bFRlWMGoSRmJour3CL1w/qL96mvipqpwQW/Sfk=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/otiai10/copy v1.14.1 h1:5/7E6qsUMBaH5AnQ0sSLzzTg1oTECmcCmT6lvF45Na8=
github.com/otiai10/copy v1.14.1/go.mod h1:oQwrEDDOci3IM8dJF0d8+jnbfPDllW6vUjNc3DoZm9I=
github.com/otiai10/mint v1.6.3 h1:87qsV/aw1F5as1eH1zS/yqHY85ANKVMgkDrf9rcxbQs=
github.com/otiai10/mint v1.6.3/go.mod h1:MJm72SBthJjz8qhefc4z1PYEieWmy8Bku7CjcAqyUSM=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.21.0 h1:DIsaGmiaBkSangBgMtWdNfxbMNdku5IK6iNhrEqWvdA=
github.com/prometheus/client_golang v1.21.0/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/common v0.63.0 h1:YR/EIY1o3mEFP/kZCD7iDMnLPlGyuU2Gb3HIcXnA98k=
github.com/prometheus/common v0.63.0/go.mod h1:VVFF/fBIoToEnWRVkYoXEkq3R3paCoxG9PXP74SnV18=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/rs/zerolog v1.15.0/go.mod h1:xYTKnLHcpfU2225ny5qZjxnj9NvkumZYjJHlAThCjNc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sethvargo/go-retry v0.3.0 h1:EEt31A35QhrcRZtrYFDTBg91cqZVnFL2navjDrah2SE=
github.com/sethvargo/go-retry v0.3.0/go.mod h1:mNX17F0C/HguQMyMyJxcnU471gOZGxCLyYaFyAZraas=
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shopspring/decimal v1.2.0 h1:abSATXmQEYyShuxI4/vyW3tV1MrKAJzCZ/0zLUXYbsQ=
github.com/shopspring/decimal v1.2.0/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/wercker/journalhook v0.0.0-20230927020745-64542ffa4117 h1:67A5tweHp3C7osHjrYsy6pQZ00bYkTTttZ7kiOwwHeA=
github.com/wercker/journalhook v0.0.0-20230927020745-64542ffa4117/go.mod h1:XCsSkdKK4gwBMNrOCZWww0pX6AOt+2gYc5Z6jBRrNVg=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/zclconf/go-cty v1.16.2 h1:LAJSwc3v81IRBZyUVQDUdZ7hs3SYs9jv0eZJDWHD/70=
github.com/zclconf/go-cty v1.16.2/go.mod h1:VvMs5i0vgZdhYawQNq5kePSpLAoz8u1xvZgrPIxfnZE=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940 h1:4r45xpDWB6ZMSMNJFMOjqrGHynW3DIBuR2H9j0ug+Mo=
github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940/go.mod h1:CmBdvvj3nqzfzJ6nTCIwDTPZ56aVGvDrmztiO5g3qrM=
github.com/zclconf/go-cty-yaml v1.1.0 h1:nP+jp0qPHv2IhUVqmQSzjvqAWcObN0KBkUl2rWBdig0=
github.com/zclconf/go-cty-yaml v1.1.0/go.mod h1:9YLUH4g7lOhVWqUbctnVlZ5KLpg7JAprQNgxSZ1Gyxs=
github.com/stretchr/testify v1.7.1-0.20210427113832-6241f9ab9942 h1:t0lM6y/M5IiUZyvbBTcngso8SZEZICH7is9B6g/obVU=
github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4=
github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI=
github.com/wercker/journalhook v0.0.0-20180428041537-5d0a5ae867b3 h1:shC1HB1UogxN5Ech3Yqaaxj1X/P656PPCB4RbojIJqc=
github.com/wercker/journalhook v0.0.0-20180428041537-5d0a5ae867b3/go.mod h1:XCsSkdKK4gwBMNrOCZWww0pX6AOt+2gYc5Z6jBRrNVg=
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
github.com/zclconf/go-cty v1.8.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
github.com/zclconf/go-cty v1.10.0 h1:mp9ZXQeIcN8kAwuqorjH+Q+njbJKjLrvB2yIh4q7U+0=
github.com/zclconf/go-cty v1.10.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
@@ -221,6 +176,7 @@ go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@@ -228,79 +184,50 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20201203163018-be400aefbc4c/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.20.0/go.mod h1:Xwo95rrVNIoSMx9wa1JroENMToLWn3RNVrTBpLHgZPQ=
golang.org/x/crypto v0.33.0 h1:IOBPskki6Lysi0lo9qQvbxiQ+FvsCC/YWOecCHAixus=
golang.org/x/crypto v0.33.0/go.mod h1:bVdXmD7IV/4GdElGPozy6U7lWdRXA4qyRVGJV57uQ5M=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122 h1:NvGWuYG8dkDHFSKksI1P9faiVJ9rayE6l0+ouWVIDs8=
golang.org/x/crypto v0.0.0-20220507011949-2cf3adece122/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.23.0 h1:Zb7khfcRGKk+kqfxFaP5tZqCnDZMjC5VtUBs87Hr6QM=
golang.org/x/mod v0.23.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/mod v0.24.0 h1:ZfthKaKaT4NrhGVZHO1/WDTwGES4De8KtWO0SIbNJMU=
golang.org/x/mod v0.24.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/mod v0.5.1 h1:OJxoQ/rynoF0dcCdI7cLPktw/hR2cueqYfjm43oqK38=
golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6 h1:nonptSpoQ4vQjyraW20DXPAglgQfVnM9ZC6MmNLMR60=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.22.0 h1:bofq7m3/HAFvbF51jz3Q9wLg3jkvSPuiZu/pD1XwgtM=
golang.org/x/text v0.22.0/go.mod h1:YRoo4H8PVmsu+E3Ou7cqLVH8oXWIHVoX0jqUWALQhfY=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190425163242-31fd60d6bfdc/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
@@ -308,33 +235,24 @@ golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190823170909-c4a336ef6a2f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.30.0 h1:BgcpHewrV5AUp2G9MebG4XPFI1E2W41zU1SaqVA9vJY=
golang.org/x/tools v0.30.0/go.mod h1:c347cR/OJfw5TI+GfX7RUPNMdDRRbjvYTS0jPyvsVtY=
golang.org/x/tools v0.31.0 h1:0EedkvKDbh+qistFTd0Bcwe/YLh4vHwWEkiI0toFIBU=
golang.org/x/tools v0.31.0/go.mod h1:naFTU+Cev749tSJRXJlna0T3WxKvb1kWEx15xA4SdmQ=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.36.1 h1:yBPeRvTftaleIgM3PZ/WBIZ7XM/eEYAaEyCwvyjq/gk=
google.golang.org/protobuf v1.36.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/inconshreveable/log15.v2 v2.0.0-20180818164646-67afb5ed74ec/go.mod h1:aPpfJ7XW+gOuirDoZ8gHhLh3kZ1B08FtV2bbmy7Jv3s=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
lukechampine.com/blake3 v1.1.7 h1:GgRMhmdsuK8+ii6UZFDL8Nb+VyMwadAgcJyfYHxG6n0=
lukechampine.com/blake3 v1.1.7/go.mod h1:tkKEOtDkNtklkXtLNEOGNq5tcV90tJiA1vAA12R78LA=


@@ -1,355 +0,0 @@
package main
import (
"context"
log "github.com/sirupsen/logrus"
"os"
"os/exec"
"path/filepath"
"somegit.dev/ALHP/ALHP.GO/ent"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"strings"
"sync"
"time"
)
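// housekeeping walks every package of the given repo/march pair: it purges
// orphaned or invalidly signed packages, keeps repo versions in sync with the
// database, re-queues packages with missing split packages, and drops database
// entries for packages that no longer exist on the mirror.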
func housekeeping(ctx context.Context, repo, march string, wg *sync.WaitGroup) error {
defer wg.Done()
fullRepo := repo + "-" + march
log.Debugf("[%s] start housekeeping", fullRepo)
packages, err := Glob(filepath.Join(conf.Basedir.Repo, fullRepo, "/**/*.pkg.tar.zst"))
if err != nil {
return err
}
log.Debugf("[HK/%s] removing orphans, signature check", fullRepo)
for _, path := range packages {
mPackage := Package(path)
dbPkg, err := mPackage.DBPackage(ctx, db)
if ent.IsNotFound(err) {
log.Infof("[HK] removing orphan %s->%s", fullRepo, filepath.Base(path))
pkg := &ProtoPackage{
FullRepo: *mPackage.FullRepo(),
PkgFiles: []string{path},
March: *mPackage.MArch(),
}
buildManager.repoPurge[pkg.FullRepo] <- []*ProtoPackage{pkg}
continue
} else if err != nil {
log.Warningf("[HK] error fetching %s->%q from db: %v", fullRepo, path, err)
continue
}
pkg := &ProtoPackage{
Pkgbase: dbPkg.Pkgbase,
Repo: mPackage.Repo(),
FullRepo: *mPackage.FullRepo(),
DBPackage: dbPkg,
March: *mPackage.MArch(),
Arch: *mPackage.Arch(),
}
// check if package is still part of repo
dbs, err := alpmHandle.SyncDBs()
if err != nil {
return err
}
buildManager.alpmMutex.Lock()
pkgResolved, err := dbs.FindSatisfier(mPackage.Name())
buildManager.alpmMutex.Unlock()
if err != nil ||
pkgResolved.DB().Name() != pkg.DBPackage.Repository.String() ||
pkgResolved.DB().Name() != pkg.Repo.String() ||
pkgResolved.Architecture() != pkg.Arch ||
pkgResolved.Name() != mPackage.Name() ||
MatchGlobList(pkg.Pkgbase, conf.Blacklist.Packages) {
switch {
case err != nil:
log.Infof("[HK] %s->%s not included in repo (resolve error: %v)", pkg.FullRepo, mPackage.Name(), err)
case pkgResolved.DB().Name() != pkg.DBPackage.Repository.String():
log.Infof("[HK] %s->%s not included in repo (repo mismatch: repo:%s != db:%s)", pkg.FullRepo,
mPackage.Name(), pkgResolved.DB().Name(), pkg.DBPackage.Repository.String())
case pkgResolved.DB().Name() != pkg.Repo.String():
log.Infof("[HK] %s->%s not included in repo (repo mismatch: repo:%s != pkg:%s)", pkg.FullRepo,
mPackage.Name(), pkgResolved.DB().Name(), pkg.Repo.String())
case pkgResolved.Architecture() != pkg.Arch:
log.Infof("[HK] %s->%s not included in repo (arch mismatch: repo:%s != pkg:%s)", pkg.FullRepo,
mPackage.Name(), pkgResolved.Architecture(), pkg.Arch)
case pkgResolved.Name() != mPackage.Name():
log.Infof("[HK] %s->%s not included in repo (name mismatch: repo:%s != pkg:%s)", pkg.FullRepo,
mPackage.Name(), pkgResolved.Name(), mPackage.Name())
case MatchGlobList(pkg.Pkgbase, conf.Blacklist.Packages):
log.Infof("[HK] %s->%s not included in repo (blacklisted pkgbase %s)", pkg.FullRepo, mPackage.Name(), pkg.Pkgbase)
}
// package not found on mirror/db -> not part of any repo anymore
err = pkg.findPkgFiles()
if err != nil {
log.Errorf("[HK] %s->%s unable to get pkg-files: %v", pkg.FullRepo, mPackage.Name(), err)
continue
}
err = db.DBPackage.DeleteOne(pkg.DBPackage).Exec(ctx)
pkg.DBPackage = nil
buildManager.repoPurge[pkg.FullRepo] <- []*ProtoPackage{pkg}
if err != nil {
return err
}
continue
}
if pkg.DBPackage.LastVerified.Before(pkg.DBPackage.BuildTimeStart) {
err := pkg.DBPackage.Update().SetLastVerified(time.Now().UTC()).Exec(ctx)
if err != nil {
return err
}
// check if pkg signature is valid
valid, err := mPackage.HasValidSignature()
if err != nil {
return err
}
if !valid {
log.Infof("[HK] %s->%s invalid package signature", pkg.FullRepo, pkg.Pkgbase)
buildManager.repoPurge[pkg.FullRepo] <- []*ProtoPackage{pkg}
continue
}
}
// compare db-version with repo version
repoVer, err := pkg.repoVersion()
if err == nil && repoVer != dbPkg.RepoVersion {
log.Infof("[HK] %s->%s update repoVersion %s->%s", pkg.FullRepo, pkg.Pkgbase, dbPkg.RepoVersion, repoVer)
pkg.DBPackage, err = pkg.DBPackage.Update().SetRepoVersion(repoVer).ClearTagRev().Save(context.Background())
if err != nil {
return err
}
}
}
// check all packages from db for existence
dbPackages, err := db.DBPackage.Query().Where(
dbpackage.And(
dbpackage.RepositoryEQ(dbpackage.Repository(repo)),
dbpackage.March(march),
)).All(context.Background())
if err != nil {
return err
}
log.Debugf("[HK/%s] checking %d packages from database", fullRepo, len(dbPackages))
for _, dbPkg := range dbPackages {
pkg := &ProtoPackage{
Pkgbase: dbPkg.Pkgbase,
Repo: dbPkg.Repository,
March: dbPkg.March,
FullRepo: dbPkg.Repository.String() + "-" + dbPkg.March,
DBPackage: dbPkg,
}
if !pkg.isAvailable(ctx, alpmHandle) {
log.Infof("[HK] %s->%s not found on mirror, removing", pkg.FullRepo, pkg.Pkgbase)
err = db.DBPackage.DeleteOne(dbPkg).Exec(context.Background())
if err != nil {
log.Errorf("[HK] error deleting package %s->%s: %v", pkg.FullRepo, dbPkg.Pkgbase, err)
}
continue
}
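// reconcile the database status of each package with what is actually present in the repo and state files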
switch {
case dbPkg.Status == dbpackage.StatusLatest && dbPkg.RepoVersion != "":
// check lastVersionBuild
if dbPkg.LastVersionBuild != dbPkg.RepoVersion {
log.Infof("[HK] %s->%s updating lastVersionBuild %s -> %s", fullRepo, dbPkg.Pkgbase, dbPkg.LastVersionBuild, dbPkg.RepoVersion)
nDBPkg, err := dbPkg.Update().SetLastVersionBuild(dbPkg.RepoVersion).Save(ctx)
if err != nil {
log.Warningf("[HK] error updating lastVersionBuild for %s->%s: %v", fullRepo, dbPkg.Pkgbase, err)
} else {
dbPkg = nDBPkg
}
}
var existingSplits []string
var missingSplits []string
for _, splitPkg := range dbPkg.Packages {
pkgFile := filepath.Join(conf.Basedir.Repo, fullRepo, "os", conf.Arch,
splitPkg+"-"+dbPkg.RepoVersion+"-"+conf.Arch+".pkg.tar.zst")
_, err = os.Stat(pkgFile)
switch {
case os.IsNotExist(err):
missingSplits = append(missingSplits, splitPkg)
case err != nil:
log.Warningf("[HK] error reading package-file %s: %v", splitPkg, err)
default:
existingSplits = append(existingSplits, pkgFile)
}
}
if len(missingSplits) > 0 {
log.Infof("[HK] %s->%s missing split-package(s): %s", fullRepo, dbPkg.Pkgbase, missingSplits)
pkg.DBPackage, err = pkg.DBPackage.Update().
ClearRepoVersion().
ClearTagRev().
SetStatus(dbpackage.StatusQueued).
Save(ctx)
if err != nil {
return err
}
pkg := &ProtoPackage{
FullRepo: fullRepo,
PkgFiles: existingSplits,
March: march,
DBPackage: dbPkg,
}
buildManager.repoPurge[fullRepo] <- []*ProtoPackage{pkg}
}
rawState, err := os.ReadFile(filepath.Join(conf.Basedir.Work, stateDir, dbPkg.Repository.String()+"-"+conf.Arch, dbPkg.Pkgbase))
if err != nil {
log.Infof("[HK] state not found for %s->%s: %v, removing package", fullRepo, dbPkg.Pkgbase, err)
pkg := &ProtoPackage{
FullRepo: fullRepo,
PkgFiles: existingSplits,
March: march,
DBPackage: dbPkg,
}
buildManager.repoPurge[fullRepo] <- []*ProtoPackage{pkg}
continue
}
state, err := parseState(string(rawState))
if err != nil {
log.Warningf("[HK] error parsing state file for %s->%s: %v", fullRepo, dbPkg.Pkgbase, err)
continue
}
if dbPkg.TagRev != nil && state.TagRev == *dbPkg.TagRev && state.PkgVer != dbPkg.Version {
log.Infof("[HK] reseting package %s->%s with mismatched state information (%s!=%s)",
fullRepo, dbPkg.Pkgbase, state.PkgVer, dbPkg.Version)
err = dbPkg.Update().SetStatus(dbpackage.StatusQueued).ClearTagRev().Exec(ctx)
if err != nil {
return err
}
}
case dbPkg.Status == dbpackage.StatusLatest && dbPkg.RepoVersion == "":
log.Infof("[HK] reseting missing package %s->%s with no repo version", fullRepo, dbPkg.Pkgbase)
err = dbPkg.Update().SetStatus(dbpackage.StatusQueued).ClearTagRev().ClearRepoVersion().Exec(ctx)
if err != nil {
return err
}
case dbPkg.Status == dbpackage.StatusSkipped && dbPkg.RepoVersion != "" && !strings.HasPrefix(dbPkg.SkipReason, "delayed"):
log.Infof("[HK] delete skipped package %s->%s", fullRepo, dbPkg.Pkgbase)
pkg := &ProtoPackage{
FullRepo: fullRepo,
March: march,
DBPackage: dbPkg,
}
buildManager.repoPurge[fullRepo] <- []*ProtoPackage{pkg}
case dbPkg.Status == dbpackage.StatusSkipped && dbPkg.SkipReason == "blacklisted" && !MatchGlobList(pkg.Pkgbase, conf.Blacklist.Packages):
log.Infof("[HK] requeue previously blacklisted package %s->%s", fullRepo, dbPkg.Pkgbase)
err = dbPkg.Update().SetStatus(dbpackage.StatusQueued).ClearSkipReason().ClearTagRev().Exec(context.Background())
if err != nil {
return err
}
case dbPkg.Status == dbpackage.StatusFailed && dbPkg.RepoVersion != "":
log.Infof("[HK] package %s->%s failed but still present in repo, removing", fullRepo, dbPkg.Pkgbase)
pkg := &ProtoPackage{
FullRepo: fullRepo,
March: march,
DBPackage: dbPkg,
}
buildManager.repoPurge[fullRepo] <- []*ProtoPackage{pkg}
}
}
log.Debugf("[HK/%s] all tasks finished", fullRepo)
return nil
}
func logHK(ctx context.Context) error {
// check if package for log exists and if error can be fixed by rebuild
logFiles, err := Glob(filepath.Join(conf.Basedir.Repo, logDir, "/**/*.log"))
if err != nil {
return err
}
for _, logFile := range logFiles {
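// derive pkgbase (file name without its extension) and march (parent directory) from the log file path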
pathSplit := strings.Split(logFile, string(filepath.Separator))
extSplit := strings.Split(filepath.Base(logFile), ".")
pkgbase := strings.Join(extSplit[:len(extSplit)-1], ".")
march := pathSplit[len(pathSplit)-2]
pkg := ProtoPackage{
Pkgbase: pkgbase,
March: march,
}
if exists, err := pkg.exists(); err != nil {
return err
} else if !exists {
_ = os.Remove(logFile)
continue
}
pkgSkipped, err := db.DBPackage.Query().Where(
dbpackage.Pkgbase(pkg.Pkgbase),
dbpackage.March(pkg.March),
dbpackage.StatusEQ(dbpackage.StatusSkipped),
).Exist(ctx)
if err != nil {
return err
}
if pkgSkipped {
_ = os.Remove(logFile)
continue
}
logContent, err := os.ReadFile(logFile)
if err != nil {
return err
}
sLogContent := string(logContent)
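// transient failures (port, signature or download errors) are requeued as-is; linker/Rust-LTO errors are requeued with LTO auto-disabled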
if rePortError.MatchString(sLogContent) || reSigError.MatchString(sLogContent) || reDownloadError.MatchString(sLogContent) ||
reDownloadError2.MatchString(sLogContent) {
rows, err := db.DBPackage.Update().Where(dbpackage.Pkgbase(pkg.Pkgbase), dbpackage.March(pkg.March),
dbpackage.StatusEQ(dbpackage.StatusFailed)).ClearTagRev().SetStatus(dbpackage.StatusQueued).Save(ctx)
if err != nil {
return err
}
if rows > 0 {
log.Infof("[HK/%s/%s] fixable build-error detected, requeueing package (%d)", pkg.March, pkg.Pkgbase, rows)
}
} else if reLdError.MatchString(sLogContent) || reRustLTOError.MatchString(sLogContent) {
rows, err := db.DBPackage.Update().Where(
dbpackage.Pkgbase(pkg.Pkgbase),
dbpackage.March(pkg.March),
dbpackage.StatusEQ(dbpackage.StatusFailed),
dbpackage.LtoNotIn(dbpackage.LtoAutoDisabled, dbpackage.LtoDisabled),
).ClearTagRev().SetStatus(dbpackage.StatusQueued).SetLto(dbpackage.LtoAutoDisabled).Save(ctx)
if err != nil {
return err
}
if rows > 0 {
log.Infof("[HK/%s/%s] fixable build-error detected (linker-error), requeueing package (%d)", pkg.March, pkg.Pkgbase, rows)
}
}
}
return nil
}
func debugHK() {
for _, march := range conf.March {
if _, err := os.Stat(filepath.Join(conf.Basedir.Debug, march)); err == nil {
log.Debugf("[DHK/%s] start cleanup debug packages", march)
cleanCmd := exec.Command("paccache", "-rc", filepath.Join(conf.Basedir.Debug, march), "-k", "1")
res, err := cleanCmd.CombinedOutput()
if err != nil {
log.Warningf("[DHK/%s] cleanup debug packages failed: %v (%s)", march, err, string(res))
}
}
}
}

504
main.go

@@ -5,18 +5,26 @@ import (
"entgo.io/ent/dialect"
"entgo.io/ent/dialect/sql"
"flag"
"fmt"
"git.harting.dev/ALHP/ALHP.GO/ent"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
"git.harting.dev/ALHP/ALHP.GO/ent/migrate"
"github.com/Jguer/go-alpm/v2"
_ "github.com/jackc/pgx/v4/stdlib"
log "github.com/sirupsen/logrus"
"github.com/wercker/journalhook"
"golang.org/x/sync/semaphore"
"gopkg.in/yaml.v2"
"html/template"
"math"
"os"
"os/exec"
"os/signal"
"path/filepath"
"somegit.dev/ALHP/ALHP.GO/ent"
"somegit.dev/ALHP/ALHP.GO/ent/migrate"
"strings"
"sync"
"syscall"
"time"
)
var (
@@ -27,9 +35,432 @@ var (
db *ent.Client
journalLog = flag.Bool("journal", false, "Log to systemd journal instead of stdout")
checkInterval = flag.Int("interval", 5, "How often svn2git should be checked in minutes (default: 5)")
configFile = flag.String("config", "config.yaml", "set config file name/path")
)
func (b *BuildManager) htmlWorker(ctx context.Context) {
type Pkg struct {
Pkgbase string
Status string
Class string
Skip string
Version string
Svn2GitVersion string
BuildDate string
BuildDuration time.Duration
Checked string
Log string
LTO bool
LTOUnknown bool
LTODisabled bool
LTOAutoDisabled bool
DebugSym bool
DebugSymNotAvailable bool
DebugSymUnknown bool
}
type Repo struct {
Name string
Packages []Pkg
}
type March struct {
Name string
Repos []Repo
}
type tpl struct {
March []March
Generated string
Latest int
Failed int
Skipped int
Queued int
LTOEnabled int
LTOUnknown int
LTODisabled int
}
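// regenerate packages.html from the database, then sleep for five minutes before the next pass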
for {
gen := &tpl{}
for _, march := range conf.March {
addMarch := March{
Name: march,
}
for _, repo := range conf.Repos {
addRepo := Repo{
Name: repo,
}
pkgs := db.DbPackage.Query().Order(ent.Asc(dbpackage.FieldPkgbase)).Where(dbpackage.MarchEQ(march), dbpackage.RepositoryEQ(dbpackage.Repository(repo))).AllX(ctx)
for _, pkg := range pkgs {
addPkg := Pkg{
Pkgbase: pkg.Pkgbase,
Status: strings.ToUpper(pkg.Status.String()),
Class: statusId2string(pkg.Status),
Skip: pkg.SkipReason,
Version: pkg.RepoVersion,
Svn2GitVersion: pkg.Version,
}
if pkg.STime != nil && pkg.UTime != nil {
addPkg.BuildDuration = time.Duration(*pkg.STime+*pkg.UTime) * time.Second
}
if !pkg.BuildTimeStart.IsZero() {
addPkg.BuildDate = pkg.BuildTimeStart.UTC().Format(time.RFC1123)
}
if !pkg.Updated.IsZero() {
addPkg.Checked = pkg.Updated.UTC().Format(time.RFC1123)
}
if pkg.Status == dbpackage.StatusFailed {
addPkg.Log = fmt.Sprintf("%s/%s/%s.log", logDir, pkg.March, pkg.Pkgbase)
}
switch pkg.Lto {
case dbpackage.LtoUnknown:
if pkg.Status != dbpackage.StatusSkipped && pkg.Status != dbpackage.StatusFailed {
addPkg.LTOUnknown = true
}
case dbpackage.LtoEnabled:
addPkg.LTO = true
case dbpackage.LtoDisabled:
addPkg.LTODisabled = true
case dbpackage.LtoAutoDisabled:
addPkg.LTOAutoDisabled = true
}
switch pkg.DebugSymbols {
case dbpackage.DebugSymbolsUnknown:
if pkg.Status != dbpackage.StatusSkipped && pkg.Status != dbpackage.StatusFailed {
addPkg.DebugSymUnknown = true
}
case dbpackage.DebugSymbolsAvailable:
addPkg.DebugSym = true
case dbpackage.DebugSymbolsNotAvailable:
addPkg.DebugSymNotAvailable = true
}
addRepo.Packages = append(addRepo.Packages, addPkg)
}
addMarch.Repos = append(addMarch.Repos, addRepo)
}
gen.March = append(gen.March, addMarch)
}
gen.Generated = time.Now().UTC().Format(time.RFC1123)
var v []struct {
Status dbpackage.Status `json:"status"`
Count int `json:"count"`
}
db.DbPackage.Query().GroupBy(dbpackage.FieldStatus).Aggregate(ent.Count()).ScanX(ctx, &v)
for _, c := range v {
switch c.Status {
case dbpackage.StatusFailed:
gen.Failed = c.Count
case dbpackage.StatusSkipped:
gen.Skipped = c.Count
case dbpackage.StatusLatest:
gen.Latest = c.Count
case dbpackage.StatusQueued:
gen.Queued = c.Count
}
}
var v2 []struct {
Status dbpackage.Lto `json:"lto"`
Count int `json:"count"`
}
db.DbPackage.Query().Where(dbpackage.StatusNEQ(dbpackage.StatusSkipped)).GroupBy(dbpackage.FieldLto).Aggregate(ent.Count()).ScanX(ctx, &v2)
for _, c := range v2 {
switch c.Status {
case dbpackage.LtoUnknown:
gen.LTOUnknown = c.Count
case dbpackage.LtoDisabled, dbpackage.LtoAutoDisabled:
gen.LTODisabled += c.Count
case dbpackage.LtoEnabled:
gen.LTOEnabled = c.Count
}
}
statusTpl, err := template.ParseFiles("tpl/packages.html")
if err != nil {
log.Warningf("[HTML] Error parsing template file: %v", err)
continue
}
f, err := os.OpenFile(filepath.Join(conf.Basedir.Repo, "packages.html"), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil {
log.Warningf("[HTML] Erro ropening output file: %v", err)
continue
}
err = statusTpl.Execute(f, gen)
if err != nil {
log.Warningf("[HTML] Error filling template: %v", err)
}
_ = f.Close()
time.Sleep(time.Minute * 5)
}
}
func (b *BuildManager) repoWorker(repo string) {
for {
select {
case pkgL := <-b.repoAdd[repo]:
toAdd := make([]string, 0)
for _, pkg := range pkgL {
toAdd = append(toAdd, pkg.PkgFiles...)
}
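// add all collected package files to the repo database in a single repo-add invocation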
args := []string{"-s", "-v", "-p", "-n", filepath.Join(conf.Basedir.Repo, repo, "os", conf.Arch, repo) + ".db.tar.xz"}
args = append(args, toAdd...)
cmd := exec.Command("repo-add", args...)
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil && cmd.ProcessState.ExitCode() != 1 {
log.Panicf("%s while repo-add: %v", string(res), err)
}
for _, pkg := range pkgL {
pkg.toDbPackage(true)
if _, err := os.Stat(filepath.Join(conf.Basedir.Debug, pkg.March, pkg.DbPackage.Packages[0]+"-debug-"+pkg.Version+"-"+conf.Arch+".pkg.tar.zst")); err == nil {
pkg.DbPackage = pkg.DbPackage.Update().
SetStatus(dbpackage.StatusLatest).
ClearSkipReason().
SetDebugSymbols(dbpackage.DebugSymbolsAvailable).
SetRepoVersion(pkg.Version).
SetHash(pkg.Hash).
SaveX(context.Background())
} else {
pkg.DbPackage = pkg.DbPackage.Update().
SetStatus(dbpackage.StatusLatest).
ClearSkipReason().
SetDebugSymbols(dbpackage.DebugSymbolsNotAvailable).
SetRepoVersion(pkg.Version).
SetHash(pkg.Hash).
SaveX(context.Background())
}
}
cmd = exec.Command("paccache", "-rc", filepath.Join(conf.Basedir.Repo, repo, "os", conf.Arch), "-k", "1")
res, err = cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Warningf("Error running paccache: %v", err)
}
err = updateLastUpdated()
if err != nil {
log.Warningf("Error updating lastupdate: %v", err)
}
case pkgL := <-b.repoPurge[repo]:
for _, pkg := range pkgL {
if _, err := os.Stat(filepath.Join(conf.Basedir.Repo, pkg.FullRepo, "os", conf.Arch, pkg.FullRepo) + ".db.tar.xz"); err != nil {
continue
}
if len(pkg.PkgFiles) == 0 {
if err := pkg.findPkgFiles(); err != nil {
log.Warningf("[%s/%s] Unable to find files: %v", pkg.FullRepo, pkg.Pkgbase, err)
continue
} else if len(pkg.PkgFiles) == 0 {
continue
}
}
var realPkgs []string
for _, filePath := range pkg.PkgFiles {
realPkgs = append(realPkgs, Package(filePath).Name())
}
b.repoWG.Add(1)
args := []string{"-s", "-v", filepath.Join(conf.Basedir.Repo, pkg.FullRepo, "os", conf.Arch, pkg.FullRepo) + ".db.tar.xz"}
args = append(args, realPkgs...)
cmd := exec.Command("repo-remove", args...)
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil && cmd.ProcessState.ExitCode() == 1 {
log.Warningf("Error while deleting package %s: %s", pkg.Pkgbase, string(res))
}
if pkg.DbPackage != nil {
_ = pkg.DbPackage.Update().ClearRepoVersion().Exec(context.Background())
}
for _, file := range pkg.PkgFiles {
_ = os.Remove(file)
_ = os.Remove(file + ".sig")
}
err = updateLastUpdated()
if err != nil {
log.Warningf("Error updating lastupdate: %v", err)
}
b.repoWG.Done()
}
}
}
}
func (b *BuildManager) syncWorker(ctx context.Context) error {
err := os.MkdirAll(filepath.Join(conf.Basedir.Work, upstreamDir), 0755)
if err != nil {
log.Fatalf("Error creating upstream dir: %v", err)
}
for {
for gitDir, gitURL := range conf.Svn2git {
gitPath := filepath.Join(conf.Basedir.Work, upstreamDir, gitDir)
if _, err := os.Stat(gitPath); os.IsNotExist(err) {
cmd := exec.Command("git", "clone", "--depth=1", gitURL, gitPath)
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Fatalf("Error running git clone: %v", err)
}
} else if err == nil {
cmd := exec.Command("git", "reset", "--hard")
cmd.Dir = gitPath
res, err := cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Fatalf("Error running git reset: %v", err)
}
cmd = exec.Command("git", "pull")
cmd.Dir = gitPath
res, err = cmd.CombinedOutput()
log.Debug(string(res))
if err != nil {
log.Warningf("Failed to update git repo %s: %v", gitDir, err)
}
}
}
// housekeeping
wg := new(sync.WaitGroup)
for _, repo := range repos {
wg.Add(1)
splitRepo := strings.Split(repo, "-")
repo := repo
go func() {
err := housekeeping(splitRepo[0], strings.Join(splitRepo[1:], "-"), wg)
if err != nil {
log.Warningf("[%s] housekeeping failed: %v", repo, err)
}
}()
}
wg.Wait()
err := logHK()
if err != nil {
log.Warningf("log-housekeeping failed: %v", err)
}
// fetch updates between sync runs
b.alpmMutex.Lock()
err = alpmHandle.Release()
if err != nil {
log.Fatalf("Error releasing ALPM handle: %v", err)
}
err = setupChroot()
for err != nil {
log.Warningf("Unable to upgrade chroot, trying again later.")
time.Sleep(time.Minute)
err = setupChroot()
}
alpmHandle, err = initALPM(filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot), filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot, "/var/lib/pacman"))
if err != nil {
log.Warningf("Error while ALPM-init: %v", err)
}
b.alpmMutex.Unlock()
queue, err := b.queue(filepath.Join(conf.Basedir.Work, upstreamDir, "/**/PKGBUILD"))
if err != nil {
log.Warningf("Error building buildQueue: %v", err)
} else {
log.Debugf("buildQueue with %d items", len(queue))
var fastQueue []*ProtoPackage
var slowQueue []*ProtoPackage
maxDiff := 0.0
cutOff := 0.0
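// find the largest gap between consecutive queue priorities; packages above that cut-off go to the slow queue if the cut-off exceeds the configured threshold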
for i := 0; i < len(queue); i++ {
if i+1 < len(queue) {
if math.Abs(queue[i].Priority()-queue[i+1].Priority()) > maxDiff {
maxDiff = math.Abs(queue[i].Priority() - queue[i+1].Priority())
cutOff = queue[i].Priority()
}
}
}
for _, pkg := range queue {
eligible, err := pkg.isEligible(ctx)
if err != nil {
log.Infof("Unable to determine status for package %s: %v", pkg.Pkgbase, err)
b.repoPurge[pkg.FullRepo] <- []*ProtoPackage{pkg}
continue
}
if !eligible {
log.Debugf("skipped package %s (%v)", pkg.Pkgbase, err)
b.repoPurge[pkg.FullRepo] <- []*ProtoPackage{pkg}
continue
}
if pkg.Priority() > cutOff && cutOff >= conf.Build.SlowQueueThreshold {
slowQueue = append(slowQueue, pkg)
} else {
fastQueue = append(fastQueue, pkg)
}
}
if len(fastQueue) > 0 && len(slowQueue) > 0 {
log.Infof("Skipping slowQueue=%d in favor of fastQueue=%d", len(slowQueue), len(fastQueue))
slowQueue = []*ProtoPackage{}
}
err = b.buildQueue(fastQueue, ctx)
if err != nil {
return err
}
err = b.buildQueue(slowQueue, ctx)
if err != nil {
return err
}
if err := b.sem.Acquire(ctx, int64(conf.Build.Worker)); err != nil {
return err
}
b.sem.Release(int64(conf.Build.Worker))
}
if ctx.Err() == nil {
for _, repo := range repos {
err = movePackagesLive(repo)
if err != nil {
log.Errorf("[%s] Error moving packages live: %v", repo, err)
}
}
} else {
return ctx.Err()
}
log.Debugf("build-cycle finished")
time.Sleep(time.Duration(*checkInterval) * time.Minute)
}
}
func main() {
killSignals := make(chan os.Signal, 1)
signal.Notify(killSignals, syscall.SIGINT, syscall.SIGTERM)
@@ -39,19 +470,19 @@ func main() {
flag.Parse()
confStr, err := os.ReadFile(*configFile)
confStr, err := os.ReadFile("config.yaml")
if err != nil {
log.Fatalf("error reading config file: %v", err)
log.Fatalf("Error reading config file: %v", err)
}
err = yaml.Unmarshal(confStr, &conf)
if err != nil {
log.Fatalf("error parsing config file: %v", err)
log.Fatalf("Error parsing config file: %v", err)
}
lvl, err := log.ParseLevel(conf.Logging.Level)
if err != nil {
log.Fatalf("error parsing log level from config: %v", err)
log.Fatalf("Error parsing log level from config: %v", err)
}
log.SetLevel(lvl)
if *journalLog {
@@ -60,67 +491,62 @@ func main() {
err = syscall.Setpriority(syscall.PRIO_PROCESS, 0, 5)
if err != nil {
log.Infof("failed to drop priority: %v", err)
log.Warningf("Failed to drop priority: %v", err)
}
err = os.MkdirAll(conf.Basedir.Repo, 0o755)
err = os.MkdirAll(conf.Basedir.Repo, 0755)
if err != nil {
log.Fatalf("error creating repo dir: %v", err)
log.Fatalf("Error creating repo dir: %v", err)
}
if conf.DB.Driver == "pgx" {
pdb, err := sql.Open("pgx", conf.DB.ConnectTo)
if conf.Db.Driver == "pgx" {
pdb, err := sql.Open("pgx", conf.Db.ConnectTo)
if err != nil {
log.Fatalf("failed to open database %s: %v", conf.DB.ConnectTo, err)
log.Fatalf("Failed to open database %s: %v", conf.Db.ConnectTo, err)
}
drv := sql.OpenDB(dialect.Postgres, pdb.DB())
db = ent.NewClient(ent.Driver(drv))
} else {
db, err = ent.Open(conf.DB.Driver, conf.DB.ConnectTo)
db, err = ent.Open(conf.Db.Driver, conf.Db.ConnectTo)
if err != nil {
log.Panicf("failed to open database %s: %v", conf.DB.ConnectTo, err)
log.Panicf("Failed to open database %s: %v", conf.Db.ConnectTo, err)
}
defer func(Client *ent.Client) {
_ = Client.Close()
}(db)
}
ctx, cancel := context.WithCancel(context.Background())
if err := db.Schema.Create(ctx, migrate.WithDropIndex(true), migrate.WithDropColumn(true)); err != nil {
log.Panicf("automigrate failed: %v", err)
if err := db.Schema.Create(context.Background(), migrate.WithDropIndex(true), migrate.WithDropColumn(true)); err != nil {
log.Panicf("Automigrate failed: %v", err)
}
buildManager = &BuildManager{
repoPurge: make(map[string]chan []*ProtoPackage),
repoAdd: make(map[string]chan []*ProtoPackage),
queueSignal: make(chan struct{}),
alpmMutex: new(sync.RWMutex),
building: []*ProtoPackage{},
buildingLock: new(sync.RWMutex),
repoWG: new(sync.WaitGroup),
repoPurge: make(map[string]chan []*ProtoPackage),
repoAdd: make(map[string]chan []*ProtoPackage),
sem: semaphore.NewWeighted(int64(conf.Build.Worker)),
}
buildManager.setupMetrics(conf.Metrics.Port)
err = setupChroot(ctx)
err = setupChroot()
if err != nil {
log.Panicf("unable to setup chroot: %v", err)
log.Fatalf("Unable to setup chroot: %v", err)
}
err = syncMarchs(ctx)
err = syncMarchs()
if err != nil {
log.Panicf("error syncing marchs: %v", err)
log.Fatalf("Error syncing marchs: %v", err)
}
alpmHandle, err = initALPM(filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot),
filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot, "/var/lib/pacman"))
alpmHandle, err = initALPM(filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot), filepath.Join(conf.Basedir.Work, chrootDir, pristineChroot, "/var/lib/pacman"))
if err != nil {
log.Panicf("error while ALPM-init: %v", err)
log.Fatalf("Error while ALPM-init: %v", err)
}
ctx, cancel := context.WithCancel(context.Background())
go func() {
_ = buildManager.syncWorker(ctx)
}()
go buildManager.htmlWorker(ctx)
killLoop:
for {
@@ -128,22 +554,22 @@ killLoop:
case <-killSignals:
break killLoop
case <-reloadSignals:
confStr, err := os.ReadFile(*configFile)
confStr, err := os.ReadFile("config.yaml")
if err != nil {
log.Panicf("unable to open config: %v", err)
log.Fatalf("Unable to open config: %v", err)
}
err = yaml.Unmarshal(confStr, &conf)
if err != nil {
log.Panicf("unable to parse config: %v", err)
log.Fatalf("Unable to parse config: %v", err)
}
lvl, err := log.ParseLevel(conf.Logging.Level)
if err != nil {
log.Panicf("failure setting logging level: %v", err)
log.Fatalf("Failure setting logging level: %v", err)
}
log.SetLevel(lvl)
log.Infof("config reloaded")
log.Infof("Config reloaded")
}
}


@@ -1,26 +0,0 @@
package main
import (
"fmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prometheus/client_golang/prometheus/promhttp"
log "github.com/sirupsen/logrus"
"net/http"
)
func (b *BuildManager) setupMetrics(port uint32) {
b.metrics.queueSize = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "build_queue_size",
Help: "Build queue size",
}, []string{"repository", "status"})
mux := http.NewServeMux()
mux.Handle("/", promhttp.Handler())
go func() {
err := http.ListenAndServe(fmt.Sprintf(":%d", port), mux) //nolint:gosec
if err != nil {
log.Errorf("failed to start metrics server: %v", err)
}
}()
}
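A minimal sketch (hypothetical helper, not part of the repository) of how the queueSize gauge registered above could be fed from the build loop; the labels must match the "repository" and "status" labels declared in setupMetrics:
// setQueueMetric is a hypothetical convenience wrapper around the GaugeVec registered above.
func (b *BuildManager) setQueueMetric(repository, status string, size int) {
	b.metrics.queueSize.WithLabelValues(repository, status).Set(float64(size))
}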


@@ -5,11 +5,11 @@ import (
"entgo.io/ent/dialect/sql"
"entgo.io/ent/dialect/sql/sqljson"
"fmt"
"git.harting.dev/ALHP/ALHP.GO/ent"
"git.harting.dev/ALHP/ALHP.GO/ent/dbpackage"
log "github.com/sirupsen/logrus"
"os/exec"
"path/filepath"
"somegit.dev/ALHP/ALHP.GO/ent"
"somegit.dev/ALHP/ALHP.GO/ent/dbpackage"
"strings"
)
@@ -22,10 +22,9 @@ func (pkg Package) Name() string {
}
// MArch returns package's march
func (pkg Package) MArch() *string {
func (pkg Package) MArch() string {
splitPath := strings.Split(string(pkg), string(filepath.Separator))
res := strings.Join(strings.Split(splitPath[len(splitPath)-4], "-")[1:], "-")
return &res
return strings.Join(strings.Split(splitPath[len(splitPath)-4], "-")[1:], "-")
}
// Repo returns package's dbpackage.Repository
@@ -35,9 +34,9 @@ func (pkg Package) Repo() dbpackage.Repository {
}
// FullRepo returns package's dbpackage.Repository-march
func (pkg Package) FullRepo() *string {
func (pkg Package) FullRepo() string {
splitPath := strings.Split(string(pkg), string(filepath.Separator))
return &splitPath[len(splitPath)-4]
return splitPath[len(splitPath)-4]
}
// Version returns version extracted from package
@@ -47,22 +46,21 @@ func (pkg Package) Version() string {
}
// Arch returns package's Architecture
func (pkg Package) Arch() *string {
func (pkg Package) Arch() string {
fNameSplit := strings.Split(filepath.Base(string(pkg)), "-")
fNameSplit = strings.Split(fNameSplit[len(fNameSplit)-1], ".")
return &fNameSplit[0]
return fNameSplit[0]
}
// HasValidSignature returns if package has valid detached signature file
func (pkg Package) HasValidSignature() (bool, error) {
cmd := exec.Command("gpg", "--verify", string(pkg)+".sig") //nolint:gosec
cmd := exec.Command("gpg", "--verify", string(pkg)+".sig")
res, err := cmd.CombinedOutput()
switch {
case cmd.ProcessState.ExitCode() == 2 || cmd.ProcessState.ExitCode() == 1:
if cmd.ProcessState.ExitCode() == 2 || cmd.ProcessState.ExitCode() == 1 {
return false, nil
case cmd.ProcessState.ExitCode() == 0:
} else if cmd.ProcessState.ExitCode() == 0 {
return true, nil
case err != nil:
} else if err != nil {
return false, fmt.Errorf("error checking signature: %w (%s)", err, res)
}
@@ -70,22 +68,22 @@ func (pkg Package) HasValidSignature() (bool, error) {
}
// DBPackage returns ent.DBPackage for package
func (pkg Package) DBPackage(ctx context.Context, db *ent.Client) (*ent.DBPackage, error) {
return pkg.DBPackageIsolated(ctx, *pkg.MArch(), pkg.Repo(), db)
func (pkg *Package) DBPackage(db *ent.Client) (*ent.DbPackage, error) {
return pkg.DBPackageIsolated(pkg.MArch(), pkg.Repo(), db)
}
// DBPackageIsolated returns ent.DBPackage like DBPackage, but not relying on the path for march and repo
func (pkg Package) DBPackageIsolated(ctx context.Context, march string, repo dbpackage.Repository, db *ent.Client) (*ent.DBPackage, error) {
dbPkg, err := db.DBPackage.Query().Where(func(s *sql.Selector) {
func (pkg *Package) DBPackageIsolated(march string, repo dbpackage.Repository, db *ent.Client) (*ent.DbPackage, error) {
dbPkg, err := db.DbPackage.Query().Where(func(s *sql.Selector) {
s.Where(
sql.And(
sqljson.ValueContains(dbpackage.FieldPackages, pkg.Name()),
sql.EQ(dbpackage.FieldMarch, march),
sql.EQ(dbpackage.FieldRepository, repo)),
)
}).Only(ctx)
}).Only(context.Background())
if ent.IsNotFound(err) {
log.Debugf("not found in database: %s", pkg.Name())
log.Debugf("Not found in database: %s", pkg.Name())
return nil, err
} else if err != nil {
return nil, err

25
pkgbuild.go Normal file

@@ -0,0 +1,25 @@
package main
import (
"path/filepath"
"strings"
)
type PKGBUILD string
// FullRepo returns full-repo from PKGBUILD'S path
func (p PKGBUILD) FullRepo() string {
sPkgbuild := strings.Split(string(p), string(filepath.Separator))
return sPkgbuild[len(sPkgbuild)-2]
}
// Repo returns repo from PKGBUILD's path
func (p PKGBUILD) Repo() string {
return strings.Split(p.FullRepo(), "-")[0]
}
// PkgBase returns pkgbase from PKGBUILD's path
func (p PKGBUILD) PkgBase() string {
sPkgbuild := strings.Split(string(p), string(filepath.Separator))
return sPkgbuild[len(sPkgbuild)-4]
}
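A small usage sketch (hypothetical test, not part of this commit) showing the checkout layout the index arithmetic above assumes, roughly .../<pkgbase>/repos/<repo>-<march>/PKGBUILD:
package main

import "testing"

// TestPKGBUILDPathAccessors is a hypothetical test; it compiles alongside pkgbuild.go in package main.
func TestPKGBUILDPathAccessors(t *testing.T) {
	p := PKGBUILD("/work/upstream/packages/gzip/repos/core-x86_64/PKGBUILD")
	if p.PkgBase() != "gzip" || p.FullRepo() != "core-x86_64" || p.Repo() != "core" {
		t.Fatalf("unexpected split: %q %q %q", p.PkgBase(), p.FullRepo(), p.Repo())
	}
}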

File diff suppressed because it is too large


@@ -51,55 +51,10 @@ package() {
# vim:set sw=2 et:
`
const PkgbuildTestWithPkgrelSub = `# Maintainer: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
pkgname=gnome-todo
pkgver=41.0+r69+ga9a5b7cd
pkgrel=1.1
pkgdesc="Task manager for GNOME"
url="https://wiki.gnome.org/Apps/Todo"
arch=(x86_64)
license=(GPL)
depends=(evolution-data-server libpeas python gtk4 libportal-gtk4 libadwaita)
makedepends=(gobject-introspection appstream-glib git meson yelp-tools)
groups=(gnome-extra)
_commit=a9a5b7cdde0244331d2d49220f04018be60c018e # master
source=("git+https://gitlab.gnome.org/GNOME/gnome-todo.git#commit=$_commit")
sha256sums=('SKIP')
pkgver() {
cd $pkgname
git describe --tags | sed 's/^GNOME_TODO_//;s/_/./g;s/[^-]*-g/r&/;s/-/+/g'
}
prepare() {
cd $pkgname
}
build() {
arch-meson $pkgname build
meson compile -C build
}
check() (
glib-compile-schemas "${GSETTINGS_SCHEMA_DIR:=$PWD/$pkgname/data}"
export GSETTINGS_SCHEMA_DIR
meson test -C build --print-errorlogs
)
package() {
meson install -C build --destdir "$pkgdir"
}
# vim:set sw=2 et:
`
func TestIncreasePkgRel(t *testing.T) { //nolint:paralleltest
pkgbuild, err := os.CreateTemp(t.TempDir(), "")
func TestIncreasePkgRel(t *testing.T) {
pkgbuild, err := os.CreateTemp("", "")
if err != nil {
t.Fatal("unable to setup temp. PKGBUILD")
t.Fatal("Unable to setup temp. PKGBUILD")
}
defer func(name string) {
_ = os.Remove(name)
@@ -107,7 +62,7 @@ func TestIncreasePkgRel(t *testing.T) { //nolint:paralleltest
_, err = pkgbuild.WriteString(PkgbuildTest)
if err != nil {
t.Fatal("unable to write to temp. PKGBUILD")
t.Fatal("Unable to write to temp. PKGBUILD")
}
_ = pkgbuild.Close()
@@ -140,48 +95,3 @@ func TestIncreasePkgRel(t *testing.T) { //nolint:paralleltest
t.Fail()
}
}
func TestIncreasePkgRelWithPkgSub(t *testing.T) { //nolint:paralleltest
pkgbuild, err := os.CreateTemp(t.TempDir(), "")
if err != nil {
t.Fatal("unable to setup temp. PKGBUILD")
}
defer func(name string) {
_ = os.Remove(name)
}(pkgbuild.Name())
_, err = pkgbuild.WriteString(PkgbuildTestWithPkgrelSub)
if err != nil {
t.Fatal("unable to write to temp. PKGBUILD")
}
_ = pkgbuild.Close()
buildPkg := &ProtoPackage{
Pkgbase: "gnome-todo",
Pkgbuild: pkgbuild.Name(),
}
err = buildPkg.increasePkgRel(1)
if err != nil {
t.Logf("increasePkgRel: %v", err)
t.Fail()
}
versionSplit := strings.Split(buildPkg.Version, "-")
if versionSplit[len(versionSplit)-1] != "1.2" {
t.Logf("increasePkgRel: expected 1.2 pkgrel, got: %s", buildPkg.Version)
t.Fail()
}
buildPkg.Srcinfo = nil
err = buildPkg.genSrcinfo()
if err != nil {
t.Logf("increasePkgRel: %v", err)
t.Fail()
}
if buildPkg.Srcinfo.Pkgrel != "1.2" {
t.Logf("increasePkgRel: expected 1.2 pkgrel, got: %s", buildPkg.Srcinfo.Pkgrel)
t.Fail()
}
}

180
tpl/packages.html Normal file

@@ -0,0 +1,180 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta content="width=device-width, initial-scale=1" name="viewport">
<title>ALHP Status</title>
<meta content="dark light" name="color-scheme">
<link crossorigin="anonymous" href="https://cdn.jsdelivr.net/npm/bootstrap-dark-5@1.1.3/dist/css/bootstrap-dark.css"
integrity="sha256-jtwIepgD1ro9ko1W5a6PAGr8IUIXA3FqBZPAXNYVREE=" rel="stylesheet">
<link crossorigin="anonymous" href="https://cdn.jsdelivr.net/npm/fork-awesome@1.2.0/css/fork-awesome.min.css"
integrity="sha256-XoaMnoYC5TH6/+ihMEnospgm0J1PM/nioxbOUdnM8HY=" rel="stylesheet">
<style>
.accordion:last-child {
padding-bottom: 8vh;
}
.info-box {
overflow: hidden;
white-space: nowrap;
}
</style>
</head>
<body>
<nav class="navbar navbar-expand-lg sticky-top navbar-light bg-info">
<div class="container">
<div class="d-flex justify-content-start">
<span class="navbar-brand align-middle">ALHP Status</span>
<span class="navbar-text">
<a class="align-middle" href="https://git.harting.dev/anonfunc/ALHP.GO"><i
class="fa fa-gitea fs-4"></i></a>
</span>
</div>
<div class="d-flex justify-content-end">
<input type="search" placeholder="Search for packages.." class="form-control" id="table-sort-input"
title="Search for package"/>
</div>
</div>
</nav>
<div class="container">
{{range $march := .March}}
<h3 class="mt-5">{{$march.Name}}</h3>
<div class="accordion" id="accordion-{{$march.Name}}">
{{range $repo := $march.Repos}}
<div class="accordion-item bg-opacity-25">
<h2 class="accordion-header" id="heading-{{$march.Name}}-{{$repo.Name}}">
<button aria-controls="collapse-{{$march.Name}}-{{$repo.Name}}" aria-expanded="false"
class="accordion-button"
data-bs-target="#collapse-{{$march.Name}}-{{$repo.Name}}"
data-bs-toggle="collapse"
type="button">{{$repo.Name}}-{{$march.Name}}
</button>
</h2>
<div aria-labelledby="heading-{{$march.Name}}-{{$repo.Name}}"
class="accordion-collapse collapse show"
data-bs-parent="#accordion-{{$march.Name}}" id="collapse-{{$march.Name}}-{{$repo.Name}}">
<div class="accordion-body overflow-auto">
<table class="table table-sorted">
<thead>
<tr>
<th scope="col">Pkgbase</th>
<th scope="col">Status</th>
<th scope="col">Reason</th>
<th class="text-center" scope="col"
title="link time optimization&#10;does not guarantee that package is actually build with LTO">
LTO
</th>
<th class="text-center" scope="col" title="Debug-symbols available via debuginfod">DS
</th>
<th scope="col">Archlinux Version</th>
<th scope="col">{{$repo.Name}}-{{$march.Name}} Version</th>
<th class="text-end" scope="col">Info</th>
</tr>
</thead>
<tbody>
{{range $pkg := $repo.Packages}}
<tr class="table-{{$pkg.Class}}"
id="{{$repo.Name}}-{{$march.Name}}-{{$pkg.Pkgbase}}">
<td>{{$pkg.Pkgbase}}</td>
<td>{{$pkg.Status}}</td>
<td>{{$pkg.Skip}}</td>
<td class="text-center fs-6">
{{if $pkg.LTO}}<i class="fa fa-check fa-lg" style="color: var(--bs-success)"
title="build with LTO"></i>{{end}}
{{if $pkg.LTODisabled}}<i class="fa fa-times fa-lg" style="color: var(--bs-danger)"
title="LTO explicitly disabled"></i>{{end}}
{{if $pkg.LTOAutoDisabled}}<i class="fa fa-times-circle-o fa-lg"
style="color: var(--bs-danger)"
title="LTO automatically disabled"></i>{{end}}
{{if $pkg.LTOUnknown}}<i class="fa fa-hourglass-o fa-lg"
title="not build with LTO yet"></i>{{end}}
</td>
<td class="text-center fs-6">
{{if $pkg.DebugSym}}<i class="fa fa-check fa-lg" style="color: var(--bs-success)"
title="Debug symbols available"></i>{{end}}
{{if $pkg.DebugSymNotAvailable}}<i class="fa fa-times fa-lg"
style="color: var(--bs-danger)"
title="Not build with debug symbols"></i>{{end}}
{{if $pkg.DebugSymUnknown}}<i class="fa fa-hourglass-o fa-lg"
title="Not build yet"></i>{{end}}
</td>
<td>{{$pkg.Svn2GitVersion}}</td>
<td>{{$pkg.Version}}</td>
<td class="text-end info-box">
{{with $pkg.Log}}<a href="{{.}}" title="build log"
><i class="fa fa-file-text fa-lg"></i></a
>{{end}}
<a class="text-decoration-none fw-bold"
href="https://archlinux.org/packages/?q={{$pkg.Pkgbase}}" title="ArchWeb">AW</a>
<a data-bs-html="true" data-bs-placement="bottom" data-bs-toggle="tooltip"
href="#{{$repo.Name}}-{{$march.Name}}-{{$pkg.Pkgbase}}"
title="{{if $pkg.BuildDate}}Build on {{$pkg.BuildDate}}&#10;{{end}}{{if $pkg.BuildDuration}}CPU-Time: {{$pkg.BuildDuration}}&#10;{{end}}Last checked on {{$pkg.Checked}}">
<i class="fa fa-info-circle fa-lg"></i></a>
</td>
</tr>
{{end}}
</tbody>
</table>
</div>
</div>
</div>
{{end}}
</div>
{{end}}
</div>
<footer class="text-center text-lg-start bg-dark mt-3 fixed-bottom">
<div class="p-2 text-center">
{{.Latest}} <span class="text-primary">built</span>
{{.Queued}} <span class="text-warning">queued</span>
{{.Skipped}} <span class="text-secondary">skipped</span>
{{.Failed}} <span class="text-danger">failed</span>
||
LTO: {{.LTOEnabled}} <span class="text-success">enabled</span>
{{.LTODisabled}} <span class="text-danger">disabled</span>
{{.LTOUnknown}} <span class="text-secondary">unknown</span>
||
<span class="text-muted">{{.Generated}}</span>
</div>
</footer>
<script crossorigin="anonymous"
integrity="sha256-9SEPo+fwJFpMUet/KACSwO+Z/dKMReF9q4zFhU/fT9M="
src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js"></script>
<script>
let input = document.getElementById('table-sort-input');
let timeout = null;
input.addEventListener('input', function (e) {
clearTimeout(timeout);
timeout = setTimeout(searchFilter, 200);
});
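// hide table rows whose first cell (pkgbase) does not contain the search term, case-insensitively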
function searchFilter() {
let input, filter, tr, td, i, txtValue;
input = document.getElementById('table-sort-input')
filter = input.value.toUpperCase()
const tables = document.getElementsByClassName('table-sorted');
for (let j = 0; j < tables.length; j++) {
tr = tables[j].getElementsByTagName('tr')
for (i = 0; i < tr.length; i++) {
td = tr[i].getElementsByTagName('td')[0]
if (td) {
txtValue = td.textContent || td.innerText
if (txtValue.toUpperCase().indexOf(filter) > -1) {
tr[i].style.display = ''
} else {
tr[i].style.display = 'none'
}
}
}
}
}
</script>
</body>
</html>

1034
utils.go

File diff suppressed because it is too large