forked from Archives/Athou_commafeed
Compare commits
29 Commits
Commits (SHA1s; author and date columns were not captured):
346fb6b1ea, 1b658c76a3, 1593ed62ba, 085eddd4b0, 0db77ad2c0, 6f8bcb6c6a, 4196dee896, 6d49e0f0df, d99f572989, fa197c33f1,
1ce39a419e, f0e3ac8fcb, 30947cea05, 9134f36d3b, dc526316a0, 6593174668, 0891c41abc, 6ecb6254aa, 84bd9eeeff, 2549c4d47b,
8750aa3dd6, 262094a736, 035201f917, ae9cbc5214, 78d5bf129a, 1f02ddd163, eff1e8cc7b, dc8475b59a, 921968662d
@@ -3,4 +3,6 @@
 # allow only what we need
 !commafeed-server/target/commafeed.jar
+!commafeed-server/config.docker-warmup.yml
 !commafeed-server/config.yml.example
@@ -1,5 +1,12 @@
 # Changelog
 
+## [4.6.0]
+
+- switched from Temurin to OpenJ9 as the JVM used in the Docker image, resulting in memory usage reduction by up to 50%
+- fix an issue that could cause old entries to reappear if they were updated by their author (#1486)
+- show all entries regardless of their read status when searching with keywords, even if the ui is configured to show
+  unread entries only
+
 ## [4.5.0]
 
 - significantly reduce the time needed to retrieve entries or mark them as read, especially when there are a lot of
Dockerfile (13 lines changed)
@@ -1,12 +1,19 @@
-FROM eclipse-temurin:21.0.3_9-jre
+FROM ibm-semeru-runtimes:open-21-jre
 
 EXPOSE 8082
 
 RUN mkdir -p /commafeed/data
 VOLUME /commafeed/data
 
+RUN apt update && apt install -y wait-for-it && apt clean
+
+ENV JAVA_TOOL_OPTIONS -Djava.net.preferIPv4Stack=true -Xtune:virtualized -Xminf0.05 -Xmaxf0.1
+
+COPY commafeed-server/config.docker-warmup.yml .
 COPY commafeed-server/config.yml.example config.yml
 COPY commafeed-server/target/commafeed.jar .
 
-ENV JAVA_TOOL_OPTIONS -Djava.net.preferIPv4Stack=true -Xms20m -XX:+UseG1GC -XX:-ShrinkHeapInSteps -XX:G1PeriodicGCInterval=10000 -XX:-G1PeriodicGCInvokesConcurrent -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10
-CMD ["java", "-jar", "commafeed.jar", "server", "config.yml"]
+# build openj9 shared classes cache to improve startup time
+RUN sh -c 'java -Xshareclasses -jar commafeed.jar server config.docker-warmup.yml &' ; wait-for-it -t 600 localhost:8088 -- pkill java ; rm -rf config.warmup.yml
+
+CMD ["java", "-Xshareclasses", "-jar", "commafeed.jar", "server", "config.yml"]
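The warmup `RUN` step above is dense. As an annotated sketch (same commands spread out, only meaningful inside this image build, where `wait-for-it` is installed and the jar and warmup config have been copied in):

```shell
# Sketch of the Dockerfile warmup step above; runs at image build time only.
java -Xshareclasses -jar commafeed.jar server config.docker-warmup.yml &  # first start populates the OpenJ9 shared classes cache
wait-for-it -t 600 localhost:8088 -- pkill java                           # once the app answers on port 8088, the cache is warm; stop it
rm -rf config.warmup.yml                                                  # cleanup, as written in the commit
```

Because the cache is baked into the image layer, the `CMD` that also passes `-Xshareclasses` can reuse it for faster container startup.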
README.md (17 lines changed)
@@ -58,7 +58,7 @@ user is `admin` and the default password is `admin`.
 The Java Virtual Machine (JVM) is rather greedy by default and will not release unused memory to the
 operating system. This is because acquiring memory from the operating system is a relatively expensive operation.
-However, this can be problematic on systems with limited memory.
+This can be problematic on systems with limited memory.
 
 #### Hard limit
 
@@ -67,16 +67,25 @@ For example, to limit the JVM to 256MB of memory, use `-Xmx256m`.
 
 #### Dynamic sizing
 
-The JVM can be configured to release unused memory to the operating system with the following parameters:
+In addition to the previous setting, the JVM can be configured to release unused memory to the operating system with the
+following parameters:
 
-    -Xms20m -XX:+UseG1GC -XX:-ShrinkHeapInSteps -XX:G1PeriodicGCInterval=10000 -XX:-G1PeriodicGCInvokesConcurrent -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10
+    -Xms20m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:-ShrinkHeapInSteps -XX:G1PeriodicGCInterval=10000 -XX:-G1PeriodicGCInvokesConcurrent -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10
 
-This is how the Docker image is configured.
 See [here](https://docs.oracle.com/en/java/javase/17/gctuning/garbage-first-g1-garbage-collector1.html)
 and [here](https://docs.oracle.com/en/java/javase/17/gctuning/factors-affecting-garbage-collection-performance.html) for
 more
 information.
 
+#### OpenJ9
+
+The [OpenJ9](https://eclipse.dev/openj9/) JVM is a more memory-efficient alternative to the HotSpot JVM, at the cost of
+slightly slower throughput.
+
+IBM provides precompiled binaries for OpenJ9
+named [Semeru](https://developer.ibm.com/languages/java/semeru-runtimes/downloads/).
+This is the JVM used in the [Docker image](https://github.com/Athou/commafeed/blob/master/Dockerfile).
+
 ## Translation
 
 Files for internationalization are
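The hard limit and the dynamic-sizing flags from the README change above combine on a single command line. A hypothetical invocation (the jar and config paths are illustrative, not from the diff):

```shell
# Hypothetical HotSpot invocation combining a hard cap (-Xmx) with the
# dynamic-sizing flags listed in the README; paths are illustrative.
java -Xmx256m \
     -Xms20m -XX:+UseG1GC -XX:+UseStringDeduplication -XX:-ShrinkHeapInSteps \
     -XX:G1PeriodicGCInterval=10000 -XX:-G1PeriodicGCInvokesConcurrent \
     -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 \
     -jar commafeed.jar server config.yml
```

In a container the same flags are typically injected via the `JAVA_TOOL_OPTIONS` environment variable, as the Dockerfile in this compare does.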
commafeed-client/package-lock.json (763 lines changed, generated; diff suppressed because it is too large)

@@ -20,12 +20,12 @@
     "@lingui/core": "^4.11.2",
     "@lingui/macro": "^4.11.2",
     "@lingui/react": "^4.11.2",
-    "@mantine/core": "^7.11.1",
-    "@mantine/form": "^7.11.1",
-    "@mantine/hooks": "^7.11.1",
-    "@mantine/modals": "^7.11.1",
-    "@mantine/notifications": "^7.11.1",
-    "@mantine/spotlight": "^7.11.1",
+    "@mantine/core": "^7.11.2",
+    "@mantine/form": "^7.11.2",
+    "@mantine/hooks": "^7.11.2",
+    "@mantine/modals": "^7.11.2",
+    "@mantine/notifications": "^7.11.2",
+    "@mantine/spotlight": "^7.11.2",
     "@monaco-editor/react": "^4.6.0",
     "@reduxjs/toolkit": "^2.2.6",
     "axios": "^1.7.2",
@@ -73,7 +73,7 @@
     "typescript": "^5.5.3",
     "vite": "^5.3.3",
     "vite-tsconfig-paths": "^4.3.2",
-    "vitest": "^1.6.0",
+    "vitest": "^2.0.2",
     "vitest-mock-extended": "^1.3.1"
   }
 }
@@ -6,16 +6,16 @@
   <parent>
     <groupId>com.commafeed</groupId>
     <artifactId>commafeed</artifactId>
-    <version>4.5.0</version>
+    <version>4.6.0</version>
   </parent>
   <artifactId>commafeed-client</artifactId>
   <name>CommaFeed Client</name>
 
   <properties>
     <!-- renovate: datasource=node-version depName=node -->
-    <node.version>v20.15.0</node.version>
+    <node.version>v20.15.1</node.version>
     <!-- renovate: datasource=npm depName=npm -->
-    <npm.version>10.8.1</npm.version>
+    <npm.version>10.8.2</npm.version>
   </properties>
 
   <build>
@@ -5,7 +5,7 @@ import { type RootState, reducers } from "app/store"
 import type { Entries, Entry } from "app/types"
 import type { AxiosResponse } from "axios"
 import { beforeEach, describe, expect, it, vi } from "vitest"
-import { mockReset } from "vitest-mock-extended"
+import { any, mockReset } from "vitest-mock-extended"
 
 const mockClient = await vi.hoisted(async () => {
     const mockModule = await import("vitest-mock-extended")
@@ -19,7 +19,7 @@ describe("entries", () => {
     })
 
     it("loads entries", async () => {
-        mockClient.feed.getEntries.mockResolvedValue({
+        mockClient.feed.getEntries.calledWith(any()).mockResolvedValue({
            data: {
                entries: [{ id: "3" } as Entry],
                hasMore: false,
@@ -53,7 +53,7 @@ describe("entries", () => {
     })
 
     it("loads more entries", async () => {
-        mockClient.category.getEntries.mockResolvedValue({
+        mockClient.category.getEntries.calledWith(any()).mockResolvedValue({
            data: {
                entries: [{ id: "4" } as Entry],
                hasMore: false,
@@ -40,7 +40,7 @@ export const loadMoreEntries = createAppAsyncThunk("entries/loadMore", async (_,
 const buildGetEntriesPaginatedRequest = (state: RootState, source: EntrySource, offset: number) => ({
     id: source.type === "tag" ? Constants.categories.all.id : source.id,
     order: state.user.settings?.readingOrder,
-    readType: state.user.settings?.readingMode,
+    readType: state.entries.search ? "all" : state.user.settings?.readingMode,
     offset,
     limit: 50,
     tag: source.type === "tag" ? source.id : undefined,
@@ -15,7 +15,6 @@ export default defineConfig(env => ({
     },
 }),
 lingui(),
-// https://github.com/vitest-dev/vitest/issues/4055#issuecomment-1732994672
 tsconfigPaths(),
 visualizer(),
 biomePlugin({
commafeed-server/config.docker-warmup.yml (new file, 151 lines; indentation reconstructed):

# CommaFeed settings
# ------------------
app:
  # url used to access commafeed
  publicUrl: http://localhost:8088/

  # whether to expose a robots.txt file that disallows web crawlers and search engine indexers
  hideFromWebCrawlers: true

  # whether to allow user registrations
  allowRegistrations: true

  # whether to enable strict password validation (1 uppercase char, 1 lowercase char, 1 digit, 1 special char)
  strictPasswordPolicy: true

  # create a demo account the first time the app starts
  createDemoAccount: true

  # put your google analytics tracking code here
  googleAnalyticsTrackingCode:

  # put your google server key (used for youtube favicon fetching)
  googleAuthKey:

  # number of http threads
  backgroundThreads: 3

  # number of database updating threads
  databaseUpdateThreads: 1

  # rows to delete per query while cleaning up old entries
  databaseCleanupBatchSize: 100

  # settings for sending emails (password recovery)
  smtpHost: localhost
  smtpPort: 25
  smtpTls: false
  smtpUserName: user
  smtpPassword: pass
  smtpFromAddress:

  # Graphite Metric settings
  # Allows those who use Graphite to have CommaFeed send metrics for graphing (time in seconds)
  graphiteEnabled: false
  graphitePrefix: "test.commafeed"
  graphiteHost: "localhost"
  graphitePort: 2003
  graphiteInterval: 60

  # whether this commafeed instance has a lot of feeds to refresh
  # leave this to false in almost all cases
  heavyLoad: false

  # minimum amount of time commafeed will wait before refreshing the same feed
  refreshIntervalMinutes: 5

  # if enabled, images in feed entries will be proxied through the server instead of accessed directly by the browser
  # useful if commafeed is usually accessed through a restricting proxy
  imageProxyEnabled: true

  # database query timeout (in milliseconds), 0 to disable
  queryTimeout: 0

  # time to keep unread statuses (in days), 0 to disable
  keepStatusDays: 0

  # entries to keep per feed, old entries will be deleted, 0 to disable
  maxFeedCapacity: 500

  # entries older than this will be deleted, 0 to disable
  maxEntriesAgeDays: 365

  # limit the number of feeds a user can subscribe to, 0 to disable
  maxFeedsPerUser: 0

  # cache service to use, possible values are 'noop' and 'redis'
  cache: noop

  # announcement string displayed on the main page
  announcement:

  # user-agent string that will be used by the http client, leave empty for the default one
  userAgent:

  # enable websocket connection so the server can notify the web client that there are new entries for your feeds
  websocketEnabled: true

  # interval at which the client will send a ping message on the websocket to keep the connection alive
  websocketPingInterval: 15m

  # if websocket is disabled or the connection is lost, the client will reload the feed tree at this interval
  treeReloadInterval: 30s

# Database connection
# -------------------
# for MariaDB
# driverClass is org.mariadb.jdbc.Driver
# url is jdbc:mariadb://localhost/commafeed?autoReconnect=true&failOverReadOnly=false&maxReconnects=20&rewriteBatchedStatements=true&timezone=UTC
#
# for MySQL
# driverClass is com.mysql.cj.jdbc.Driver
# url is jdbc:mysql://localhost/commafeed?autoReconnect=true&failOverReadOnly=false&maxReconnects=20&rewriteBatchedStatements=true&timezone=UTC
#
# for PostgreSQL
# driverClass is org.postgresql.Driver
# url is jdbc:postgresql://localhost:5432/commafeed

database:
  driverClass: org.h2.Driver
  url: jdbc:h2:mem:commafeed
  user: sa
  password: sa
  properties:
    charSet: UTF-8
  validationQuery: "/* CommaFeed Health Check */ SELECT 1"

server:
  applicationConnectors:
    - type: http
      port: 8088

logging:
  level: INFO
  loggers:
    com.commafeed: DEBUG
    liquibase: INFO
    org.hibernate.SQL: INFO # or ALL for sql debugging
    org.hibernate.engine.internal.StatisticalLoggingSessionEventListener: WARN
  appenders:
    - type: console
    - type: file
      currentLogFilename: log/commafeed.log
      threshold: ALL
      archive: true
      archivedLogFilenamePattern: log/commafeed-%d.log
      archivedFileCount: 5
      timeZone: UTC

# Redis pool configuration
# (only used if app.cache is 'redis')
# -----------------------------------
redis:
  host: localhost
  port: 6379
  # username is only required when using ACLs
  username:
  password:
  timeout: 2000
  database: 0
  maxTotal: 500
@@ -6,7 +6,7 @@
   <parent>
     <groupId>com.commafeed</groupId>
     <artifactId>commafeed</artifactId>
-    <version>4.5.0</version>
+    <version>4.6.0</version>
   </parent>
   <artifactId>commafeed-server</artifactId>
   <name>CommaFeed Server</name>
@@ -59,12 +59,12 @@
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-surefire-plugin</artifactId>
-    <version>3.3.0</version>
+    <version>3.3.1</version>
   </plugin>
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-failsafe-plugin</artifactId>
-    <version>3.3.0</version>
+    <version>3.3.1</version>
     <executions>
       <execution>
         <goals>
@@ -236,7 +236,7 @@
   <dependency>
     <groupId>com.commafeed</groupId>
     <artifactId>commafeed-client</artifactId>
-    <version>4.5.0</version>
+    <version>4.6.0</version>
   </dependency>
 
   <dependency>
@@ -383,7 +383,7 @@
   <dependency>
     <groupId>org.jsoup</groupId>
     <artifactId>jsoup</artifactId>
-    <version>1.17.2</version>
+    <version>1.18.1</version>
   </dependency>
   <dependency>
     <groupId>com.ibm.icu</groupId>
@@ -428,7 +428,7 @@
   <dependency>
     <groupId>com.manticore-projects.tools</groupId>
     <artifactId>h2migrationtool</artifactId>
-    <version>1.6</version>
+    <version>1.7</version>
   </dependency>
 
   <dependency>
@@ -50,7 +50,11 @@ public class FeedEntryDAO extends GenericDAO<FeedEntry> {
  * Delete entries older than a certain date
  */
 public int deleteEntriesOlderThan(Instant olderThan, long max) {
-    List<FeedEntry> list = query().selectFrom(ENTRY).where(ENTRY.updated.lt(olderThan)).orderBy(ENTRY.updated.asc()).limit(max).fetch();
+    List<FeedEntry> list = query().selectFrom(ENTRY)
+            .where(ENTRY.published.lt(olderThan))
+            .orderBy(ENTRY.published.asc())
+            .limit(max)
+            .fetch();
     return delete(list);
 }
@@ -58,7 +62,7 @@ public class FeedEntryDAO extends GenericDAO<FeedEntry> {
  * Delete the oldest entries of a feed
  */
 public int deleteOldEntries(Long feedId, long max) {
-    List<FeedEntry> list = query().selectFrom(ENTRY).where(ENTRY.feed.id.eq(feedId)).orderBy(ENTRY.updated.asc()).limit(max).fetch();
+    List<FeedEntry> list = query().selectFrom(ENTRY).where(ENTRY.feed.id.eq(feedId)).orderBy(ENTRY.published.asc()).limit(max).fetch();
     return delete(list);
 }
@@ -61,7 +61,7 @@ public class FeedEntryStatusDAO extends GenericDAO<FeedEntryStatus> {
 private FeedEntryStatus handleStatus(User user, FeedEntryStatus status, FeedSubscription sub, FeedEntry entry) {
     if (status == null) {
         Instant unreadThreshold = config.getApplicationSettings().getUnreadThreshold();
-        boolean read = unreadThreshold != null && entry.getUpdated().isBefore(unreadThreshold);
+        boolean read = unreadThreshold != null && entry.getPublished().isBefore(unreadThreshold);
         status = new FeedEntryStatus(user, sub, entry);
         status.setRead(read);
         status.setMarkable(!read);
@@ -92,9 +92,9 @@ public class FeedEntryStatusDAO extends GenericDAO<FeedEntryStatus> {
 }
 
 if (order == ReadingOrder.asc) {
-    query.orderBy(STATUS.entryUpdated.asc(), STATUS.id.asc());
+    query.orderBy(STATUS.entryPublished.asc(), STATUS.id.asc());
 } else {
-    query.orderBy(STATUS.entryUpdated.desc(), STATUS.id.desc());
+    query.orderBy(STATUS.entryPublished.desc(), STATUS.id.desc());
 }
 
 if (offset > -1) {
@@ -165,9 +165,9 @@ public class FeedEntryStatusDAO extends GenericDAO<FeedEntryStatus> {
 
 if (order != null) {
     if (order == ReadingOrder.asc) {
-        query.orderBy(ENTRY.updated.asc(), ENTRY.id.asc());
+        query.orderBy(ENTRY.published.asc(), ENTRY.id.asc());
     } else {
-        query.orderBy(ENTRY.updated.desc(), ENTRY.id.desc());
+        query.orderBy(ENTRY.published.desc(), ENTRY.id.desc());
     }
 }
 
@@ -199,7 +199,7 @@ public class FeedEntryStatusDAO extends GenericDAO<FeedEntryStatus> {
 }
 
 public UnreadCount getUnreadCount(FeedSubscription sub) {
-    JPAQuery<Tuple> query = query().select(ENTRY.count(), ENTRY.updated.max())
+    JPAQuery<Tuple> query = query().select(ENTRY.count(), ENTRY.published.max())
         .from(ENTRY)
         .leftJoin(ENTRY.statuses, STATUS)
         .on(STATUS.subscription.eq(sub))
@@ -208,8 +208,8 @@ public class FeedEntryStatusDAO extends GenericDAO<FeedEntryStatus> {
 
 Tuple tuple = query.fetchOne();
 Long count = tuple.get(ENTRY.count());
-Instant updated = tuple.get(ENTRY.updated.max());
-return new UnreadCount(sub.getId(), count == null ? 0 : count, updated);
+Instant published = tuple.get(ENTRY.published.max());
+return new UnreadCount(sub.getId(), count == null ? 0 : count, published);
 }
 
 private BooleanBuilder buildUnreadPredicate() {
@@ -219,7 +219,7 @@ public class FeedEntryStatusDAO extends GenericDAO<FeedEntryStatus> {
 
 Instant unreadThreshold = config.getApplicationSettings().getUnreadThreshold();
 if (unreadThreshold != null) {
-    return or.and(ENTRY.updated.goe(unreadThreshold));
+    return or.and(ENTRY.published.goe(unreadThreshold));
 } else {
     return or;
 }
@@ -59,7 +59,7 @@ public class FeedRefreshWorker {
 Integer maxEntriesAgeDays = config.getApplicationSettings().getMaxEntriesAgeDays();
 if (maxEntriesAgeDays > 0) {
     Instant threshold = Instant.now().minus(Duration.ofDays(maxEntriesAgeDays));
-    entries = entries.stream().filter(entry -> entry.updated().isAfter(threshold)).toList();
+    entries = entries.stream().filter(entry -> entry.published().isAfter(threshold)).toList();
 }
 
 String urlAfterRedirect = result.urlAfterRedirect();
@@ -73,7 +73,7 @@ public class FeedParser {
 String title = feed.getTitle();
 String link = feed.getLink();
 List<Entry> entries = buildEntries(feed, feedUrl);
-Instant lastEntryDate = entries.stream().findFirst().map(Entry::updated).orElse(null);
+Instant lastEntryDate = entries.stream().findFirst().map(Entry::published).orElse(null);
 Instant lastPublishedDate = toValidInstant(feed.getPublishedDate(), false);
 if (lastPublishedDate == null || lastEntryDate != null && lastPublishedDate.isBefore(lastEntryDate)) {
     lastPublishedDate = lastEntryDate;
@@ -123,13 +123,13 @@ public class FeedParser {
     url = guid;
 }
 
-Instant updated = buildEntryUpdateDate(item);
+Instant publishedDate = buildEntryPublishedDate(item);
 Content content = buildContent(item);
 
-entries.add(new Entry(guid, url, updated, content));
+entries.add(new Entry(guid, url, publishedDate, content));
 }
 
-entries.sort(Comparator.comparing(Entry::updated).reversed());
+entries.sort(Comparator.comparing(Entry::published).reversed());
 return entries;
 }
 
@@ -154,10 +154,10 @@ public class FeedParser {
 return new Enclosure(enclosure.getUrl(), enclosure.getType());
 }
 
-private Instant buildEntryUpdateDate(SyndEntry item) {
-    Date date = item.getUpdatedDate();
+private Instant buildEntryPublishedDate(SyndEntry item) {
+    Date date = item.getPublishedDate();
     if (date == null) {
-        date = item.getPublishedDate();
+        date = item.getUpdatedDate();
     }
     return toValidInstant(date, true);
 }
@@ -262,7 +262,7 @@ public class FeedParser {
 
 SummaryStatistics stats = new SummaryStatistics();
 for (int i = 0; i < entries.size() - 1; i++) {
-    long diff = Math.abs(entries.get(i).updated().toEpochMilli() - entries.get(i + 1).updated().toEpochMilli());
+    long diff = Math.abs(entries.get(i).published().toEpochMilli() - entries.get(i + 1).published().toEpochMilli());
     stats.addValue(diff);
 }
 return (long) stats.getMean();
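The renamed `buildEntryPublishedDate` flips the fallback order: the parser now prefers an item's published date and only falls back to its updated date, which is what prevents edited entries from reappearing as new (#1486). A standalone sketch of just that rule (plain `Date`s stand in for Rome's `SyndEntry` getters; the class name is hypothetical):

```java
import java.time.Instant;
import java.util.Date;

// Hypothetical standalone rendition of the fallback rule from the diff above:
// prefer the published date, fall back to the updated date (the old code
// preferred updated). Plain Dates stand in for SyndEntry's getters.
public class PublishedDateFallback {
    static Instant entryDate(Date published, Date updated) {
        Date date = published != null ? published : updated;  // new preference order
        return date == null ? null : date.toInstant();
    }

    public static void main(String[] args) {
        System.out.println(entryDate(new Date(1000L), new Date(2000L)));  // published wins when both are set
        System.out.println(entryDate(null, new Date(2000L)));             // falls back to updated
    }
}
```

Because the published date does not change when an author edits an entry, sorting and unread predicates keyed on it stay stable across updates.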
@@ -5,7 +5,7 @@ import java.util.List;
 
 public record FeedParserResult(String title, String link, Instant lastPublishedDate, Long averageEntryInterval, Instant lastEntryDate,
         List<Entry> entries) {
-    public record Entry(String guid, String url, Instant updated, Content content) {
+    public record Entry(String guid, String url, Instant published, Content content) {
     }
 
     public record Content(String title, String content, String author, String categories, Enclosure enclosure, Media media) {
@@ -37,11 +37,18 @@ public class FeedEntry extends AbstractModel {
 @Column(length = 2048)
 private String url;
 
+/**
+ * the moment the entry was inserted in the database
+ */
 @Column
 private Instant inserted;
 
-@Column
-private Instant updated;
+/**
+ * the moment the entry was published in the feed
+ */
+@Column(name = "updated")
+private Instant published;
 
 @OneToMany(mappedBy = "entry", cascade = CascadeType.REMOVE)
 private Set<FeedEntryStatus> statuses;
@@ -50,8 +50,8 @@ public class FeedEntryStatus extends AbstractModel {
 @Column
 private Instant entryInserted;
 
-@Column
-private Instant entryUpdated;
+@Column(name = "entryUpdated")
+private Instant entryPublished;
 
 public FeedEntryStatus() {
 
@@ -62,7 +62,7 @@ public class FeedEntryStatus extends AbstractModel {
 this.subscription = subscription;
 this.entry = entry;
 this.entryInserted = entry.getInserted();
-this.entryUpdated = entry.getUpdated();
+this.entryPublished = entry.getPublished();
 }
 
 }
@@ -45,7 +45,7 @@ public class FeedEntryService {
 feedEntry.setGuid(FeedUtils.truncate(entry.guid(), 2048));
 feedEntry.setGuidHash(Digests.sha1Hex(entry.guid()));
 feedEntry.setUrl(FeedUtils.truncate(entry.url(), 2048));
-feedEntry.setUpdated(entry.updated());
+feedEntry.setPublished(entry.published());
 feedEntry.setInserted(Instant.now());
 feedEntry.setFeed(feed);
 feedEntry.setContent(feedEntryContentService.findOrCreate(entry.content(), feed.getLink()));
@@ -124,7 +124,7 @@ public class FeedEntryService {
 
 private void markList(List<FeedEntryStatus> statuses, Instant olderThan, Instant insertedBefore) {
     List<FeedEntryStatus> statusesToMark = statuses.stream().filter(FeedEntryStatus::isMarkable).filter(s -> {
-        Instant entryDate = s.getEntry().getUpdated();
+        Instant entryDate = s.getEntry().getPublished();
         return olderThan == null || entryDate == null || entryDate.isBefore(olderThan);
     }).filter(s -> {
         Instant insertedDate = s.getEntry().getInserted();
@@ -115,7 +115,7 @@ public class Entry implements Serializable {
 entry.setRead(status.isRead());
 entry.setStarred(status.isStarred());
 entry.setMarkable(status.isMarkable());
-entry.setDate(feedEntry.getUpdated());
+entry.setDate(feedEntry.getPublished());
 entry.setInsertedDate(feedEntry.getInserted());
 entry.setUrl(feedEntry.getUrl());
 entry.setFeedName(sub.getTitle());
@@ -295,7 +295,7 @@ public class FeverREST {
 i.setUrl(s.getEntry().getUrl());
 i.setSaved(s.isStarred());
 i.setRead(s.isRead());
-i.setCreatedOnTime(s.getEntryUpdated().getEpochSecond());
+i.setCreatedOnTime(s.getEntryPublished().getEpochSecond());
 return i;
 }
pom.xml (2 lines changed)
@@ -5,7 +5,7 @@
 
 <groupId>com.commafeed</groupId>
 <artifactId>commafeed</artifactId>
-<version>4.5.0</version>
+<version>4.6.0</version>
 <name>CommaFeed</name>
 <packaging>pom</packaging>