Spring and scheduled tasks on multiple instances

#1
We have a Spring Boot application with scheduled tasks.

We want to deploy the application on multiple servers, so there will be multiple instances of it.

How do we configure Spring to run the scheduled tasks only on specific servers?

#2
The simplest way to do it with Spring is with an externalized property and the @Value annotation:

1 - Read the property into your class with @Value:

```java
@Value("${TASK_ENABLED}")
private boolean taskEnabled;
```

2 - Check the taskEnabled value before executing the task:

```java
@Scheduled(fixedDelay = 50000)
public void myTask() {
    if (this.taskEnabled) {
        // do stuff here...
    }
}
```

3 - Set the correct value per server (note that `-D` sets a JVM system property; Spring resolves `${TASK_ENABLED}` from system properties as well as environment variables):

false:

```
java -DTASK_ENABLED=0 -jar software.jar
```

or true:

```
java -DTASK_ENABLED=1 -jar software.jar
```



**Example with a global configuration class**

To use a global configuration class, register it as a Spring bean with @Component and annotate a setter method so the injected value is copied into a static field.

1 - Create the configuration class with a static field:

```java
@Component
public class AppConfiguration {

    public static boolean taskEnabled;

    @Value("${TASK_ENABLED}")
    public void setTaskEnabled(boolean taskEnabled) {
        AppConfiguration.taskEnabled = taskEnabled;
    }
}
```

2 - Check the taskEnabled value before executing the task:

```java
@Scheduled(fixedDelay = 50000)
public void myTask() {
    if (AppConfiguration.taskEnabled) {
        // do stuff here...
    }
}
```

3 - Set the correct value per server:

false:

```
java -DTASK_ENABLED=0 -jar software.jar
```

or true:

```
java -DTASK_ENABLED=1 -jar software.jar
```

#3
I think the help you need is in one of the answers to another post.

See this post:

[To see links please register here]


#4
The simplest solution may be to use different properties files for different instances. Here are the steps:

1. Annotate your scheduler class with `@ConditionalOnProperty(value = "enable-scheduler", havingValue = "true")` (the property name goes in `value`/`name`; `prefix` alone is not valid here)
2. Add a boolean to the properties file: `enable-scheduler=true`
3. Now use `enable-scheduler=true` in the properties file of the instance that should run the scheduler, and `enable-scheduler=false` for all the others.

Example:

```java
@Component
@ConditionalOnProperty(value = "enable-scheduler", havingValue = "true")
public class AnyScheduler {

    private final Logger log = LoggerFactory.getLogger(getClass());

    private final AnyService service;

    @Autowired
    public AnyScheduler(AnyService service) {
        this.service = service;
    }

    @Scheduled(cron = "${scheduler-cron}")
    public void syncModifiedCve() {
        log.info("Scheduler started. . .");
        service.doTask();
    }

}
```

#5
One of the best options is to use the Quartz scheduler with clustering.
It's simple, just:

```groovy
implementation("org.springframework.boot:spring-boot-starter-quartz")
```

And configure jobs for quartz with spring (see [tutorial][1])
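To make the job-configuration step concrete, here is a minimal sketch (the job class, bean names, and cron expression are illustrative placeholders of mine, not taken from the tutorial). With the Quartz starter, Spring Boot auto-configuration picks up `JobDetail` and `Trigger` beans and registers them with the scheduler:

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.quartz.QuartzJobBean;

@Configuration
public class QuartzJobConfig {

    // The job itself: Quartz instantiates it for every execution.
    public static class MyJob extends QuartzJobBean {
        @Override
        protected void executeInternal(JobExecutionContext context) {
            // do the actual work here
        }
    }

    // Durable JobDetail, so the job definition survives even without an attached trigger.
    @Bean
    public JobDetail myJobDetail() {
        return JobBuilder.newJob(MyJob.class)
                .withIdentity("myJob")
                .storeDurably()
                .build();
    }

    // Cron trigger; in cluster mode only one node fires it per schedule.
    @Bean
    public Trigger myJobTrigger(JobDetail myJobDetail) {
        return TriggerBuilder.newTrigger()
                .forJob(myJobDetail)
                .withIdentity("myJobTrigger")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 3 * * ?")) // placeholder: 03:00 daily
                .build();
    }
}
```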

Clustering config in application.yaml:
```yaml
spring:
  datasource: ... # define jdbc datasource
  quartz:
    job-store-type: jdbc # database mode
    jdbc:
      initialize-schema: never # for clustering, do not initialize the table structure automatically
    properties:
      org.quartz:
        scheduler:
          # AUTO generates the instance ID from hostname and timestamp; it can be any string,
          # but it must be unique across the qrtz_scheduler_state INSTANCE_NAME field for all schedulers
          instanceId: AUTO
          #instanceName: clusteredScheduler #quartzScheduler
        jobStore:
          class: org.quartz.impl.jdbcjobstore.JobStoreTX # persistence configuration
          driverDelegateClass: org.quartz.impl.jdbcjobstore.StdJDBCDelegate # use a database-specific delegate if one exists
          # store all JobDataMap values as strings, so complex objects are kept as name-value pairs
          # rather than serialized into BLOB columns; this avoids class-versioning problems
          useProperties: true
          tablePrefix: QRTZ_ # database table prefix
          # how many milliseconds a trigger may miss its next fire time before being considered
          # misfired (default: 60000, i.e. 60 seconds)
          misfireThreshold: 60000
          # how often (in milliseconds) this instance checks in with the other cluster instances;
          # affects how quickly failed instances are detected
          clusterCheckinInterval: 5000
          isClustered: true # turn on clustering
        threadPool: # thread pool
          class: org.quartz.simpl.SimpleThreadPool
          threadCount: 10
          threadPriority: 5
          threadsInheritContextClassLoaderOfInitializingThread: true
```

Attention on `initialize-schema: never`: for cluster mode you need to initialize the schema yourself.

See official scripts:

[To see links please register here]

You can apply them through liquibase/flyway/etc., but remove the `DROP ...` queries! That is why, in cluster mode, we do not initialize the schema automatically.


See [quartz docs][2]
See [spring boot docs quartz][3]
See [article with example](

[To see links please register here]

)


[1]:

[To see links please register here]

[2]:

[To see links please register here]

[3]:

[To see links please register here]


#6
The **Spring - ShedLock** project is created specifically for this purpose.

Dependency:

```xml
<groupId>net.javacrumbs.shedlock</groupId>
<artifactId>shedlock-spring</artifactId>
```

Configuration:

```java
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
```

Implementation:

```java
@Scheduled(cron = "0 0/15 * * * ?")
@SchedulerLock(name = "AnyUniqueName",
        lockAtLeastForString = "PT5M", lockAtMostForString = "PT10M")
public void scheduledTask() {
    // ...
}
```

This setup makes sure that at most one instance runs the scheduled task at the same time.

If you want only one specific instance to run the scheduler task, you need to configure your scheduler via a properties file and control the scheduler switch like this:

```java
@ConditionalOnProperty(
        value = "scheduling.enabled", havingValue = "true", matchIfMissing = true
)
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30S")
public class SchedulingConfig {
```

Now provide the property `scheduling.enabled=true` in the `application.properties` file of the instance from which you want the scheduler to run.

Follow this [link][1] for complete implementation.


[1]:

[To see links please register here]


#7
We had the same use case but weren't allowed to use a database.
A simple hack: create a file at a shared location; the instance that succeeds in creating the file runs the scheduled process (`File.createNewFile()` is atomic and returns `true` only for the instance that actually created the file).
```java
File file = new File(path);
if (file.createNewFile()) { // throws IOException, handle or declare it
    // run task
}
```
You can also add a random sleep before creating the file, so the instances don't all hit the shared location at the same instant.
```java
SecureRandom secureRandom = new SecureRandom();
Thread.sleep(secureRandom.nextInt(100));
```
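Combining the two snippets into one helper (a sketch; the marker-file location is whatever shared path you use, and remember to delete the file after the run, or use a per-run file name, otherwise no instance will run the next cycle):

```java
import java.io.File;
import java.io.IOException;
import java.security.SecureRandom;

public class FileLockElection {

    // Sleep a random 0-99 ms to spread the instances out, then try to create
    // the marker file. createNewFile() returns true only for the one instance
    // that actually created the file; every other instance sees it already exists.
    public static boolean tryBecomeRunner(File markerFile)
            throws IOException, InterruptedException {
        SecureRandom secureRandom = new SecureRandom();
        Thread.sleep(secureRandom.nextInt(100));
        return markerFile.createNewFile();
    }
}
```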

#8
This is a very wide topic, and there are many options to achieve it.

1. You can configure your application with multiple profiles. For example, add an extra profile `cron` and start the application with it on only one server. So on a production environment with three servers (S1, S2, S3), you could run S1 with the profiles prod and cron (`-Dspring.profiles.active=prod,cron`), and S2 and S3 with just the prod profile (`-Dspring.profiles.active=prod`).

   In code, put `@Profile("cron")` on the scheduler classes; they will then be instantiated only when the cron profile is active.

2. Use a distributed lock. If you have Zookeeper in your environment, you can use it to build a distributed locking system.

3. You can use a database (e.g. MySQL) and write code that takes a lock on a table and adds an entry. Whichever instance gets the lock makes the entry and executes the cron job. You need a check in your code: proceed with execution only if `getLock()` was successful. MySQL has utilities like `LOCK TABLES` which you can use to guard against concurrent reads/writes.

4. Use [Spring shedlock][1]. This library solves exactly this problem quite elegantly and with minimal code. Have a look at an example [here][2].

Personally, I would say option 2 or option 4 is the best of all.
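Option 3 can be sketched with MySQL's `GET_LOCK()` (the lock name, the zero timeout, and the `shouldRun` helper below are my own illustrative choices, not from the answer above). `GET_LOCK` returns 1 when the lock is acquired, 0 on timeout, and NULL on error; the lock is released with `RELEASE_LOCK()` or when the connection closes:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CronLock {

    // Pure helper: interpret the result of MySQL's GET_LOCK().
    // 1 = lock acquired, 0 = timed out, NULL = error.
    public static boolean shouldRun(Integer getLockResult) {
        return getLockResult != null && getLockResult == 1;
    }

    // Sketch: try to take a named server-side lock with a zero timeout;
    // only the instance that acquires it runs the job.
    public static boolean tryAcquire(Connection connection) throws SQLException {
        try (PreparedStatement ps =
                     connection.prepareStatement("SELECT GET_LOCK('cron-job-lock', 0)")) {
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                int result = rs.getInt(1);
                Integer lockResult = rs.wasNull() ? null : result;
                return shouldRun(lockResult);
            }
        }
    }
}
```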


[1]:

[To see links please register here]

[2]:

[To see links please register here]


#9
This is an addition to the [answer](

[To see links please register here]

) by [Alexey Stepanov](

[To see links please register here]

).
I hope this information will be useful.

Below is an example of a multi-instance Spring Boot application that launches a cron job.
The job must run on only one of the instances.
The configuration of each instance must be the same.
If a job fails, it should be retried up to 3 times, with a delay of 5 minutes * the number of the restart attempt.
If the job still fails after 3 restarts, the default cron for our job trigger should be restored.


**We will use Quartz in cluster mode:**

Deps:
```groovy
implementation("org.springframework.boot:spring-boot-starter-quartz")
```

First of all, it is a bad idea to use Thread.sleep(600000), as explained in this [answer](

[To see links please register here]

)
*Our job:*
```kotlin
@Component
@Profile("quartz")
class SomeJob(
    private val someService: SomeService
) : QuartzJobBean() {
    private val log: Logger = LoggerFactory.getLogger(SomeJob::class.java)

    override fun executeInternal(jobExecutionContext: JobExecutionContext) {
        try {
            log.info("Doing awesome work...")
            someService.work()
            if ((1..10).random() >= 5) throw RuntimeException("Something went wrong...")
        } catch (e: Exception) {
            throw JobExecutionException(e)
        }
    }
}
```

Here is the Quartz configuration (more information [here](

[To see links please register here]

)):
```kotlin
@Configuration
@Profile("quartz")
class JobConfig {
    //JobDetail for our job
    @Bean
    fun someJobDetail(): JobDetail {
        return JobBuilder
            .newJob(SomeJob::class.java).withIdentity("SomeJob")
            .withDescription("Some job")
            //if we want the job to be launched after an application instance crash, at the next launch
            .requestRecovery(true)
            .storeDurably().build()
    }

    //Trigger
    @Bean
    fun someJobTrigger(someJobDetail: JobDetail): Trigger {
        return TriggerBuilder.newTrigger().forJob(someJobDetail)
            .withIdentity("SomeJobTrigger")
            .withSchedule(CronScheduleBuilder.cronSchedule("0 0 4 L-1 * ? *"))
            .build()
    }

    //Otherwise, changing the cron of an existing trigger will not take effect
    //(the old cron value stays in the database)
    @Bean
    fun scheduler(triggers: List<Trigger>, jobDetails: List<JobDetail>, factory: SchedulerFactoryBean): Scheduler {
        factory.setWaitForJobsToCompleteOnShutdown(true)
        val scheduler = factory.scheduler
        factory.setOverwriteExistingJobs(true)
        //https://stackoverflow.com/questions/39673572/spring-quartz-scheduler-race-condition
        factory.setTransactionManager(JdbcTransactionManager())
        rescheduleTriggers(triggers, scheduler)
        scheduler.start()
        return scheduler
    }

    private fun rescheduleTriggers(triggers: List<Trigger>, scheduler: Scheduler) {
        triggers.forEach {
            if (!scheduler.checkExists(it.key)) {
                scheduler.scheduleJob(it)
            } else {
                scheduler.rescheduleJob(it.key, it)
            }
        }
    }
}
```
Add a listener to the scheduler:
```kotlin
@Component
@Profile("quartz")
class JobListenerConfig(
    private val schedulerFactory: SchedulerFactoryBean,
    private val jobListener: JobListener
) {
    @PostConstruct
    fun addListener() {
        schedulerFactory.scheduler.listenerManager.addJobListener(jobListener, KeyMatcher.keyEquals(jobKey("SomeJob")))
    }
}
```

And now the most important part: the logic for handling the execution of our job in the listener:
```kotlin
@Component //needed so that JobListenerConfig above can inject this listener as a bean
@Profile("quartz")
class JobListener(
    //can be obtained from the execution context, but it can also be injected
    private val scheduler: Scheduler,
    private val triggers: List<Trigger>
) : JobListenerSupport() {

    private lateinit var triggerCronMap: Map<String, String>

    @PostConstruct
    fun post() {
        //there will be no recovery triggers here, only our self-written ones
        triggerCronMap = triggers.associate {
            it.key.name to (it as CronTrigger).cronExpression
        }
    }

    override fun getName(): String {
        return "myJobListener"
    }

    override fun jobToBeExecuted(context: JobExecutionContext) {
        log.info("Job: ${context.jobDetail.key.name} ready to start by trigger: ${context.trigger.key.name}")
    }

    override fun jobWasExecuted(context: JobExecutionContext, jobException: JobExecutionException?) {
        //you can use context.mergedJobDataMap
        val dataMap = context.trigger.jobDataMap
        val count = if (dataMap["count"] != null) dataMap.getIntValue("count") else {
            dataMap.putAsString("count", 1)
            1
        }
        //in the if block, you can add the condition && !context.trigger.key.name.startsWith("recover_") -
        //in that case, the scheduler will not restart recovery triggers if they fail during execution
        if (jobException != null) {
            if (count < 3) {
                log.warn("Job: ${context.jobDetail.key.name} failed during execution. Restart attempt count: $count")
                val oldTrigger = context.trigger
                var newTriggerName = context.trigger.key.name + "_retry"
                //in case such a trigger already exists
                context.scheduler.getTriggersOfJob(context.jobDetail.key)
                    .map { it.key.name }
                    .takeIf { it.contains(newTriggerName) }
                    ?.apply { newTriggerName += "_retry" }
                val newTrigger = TriggerBuilder.newTrigger()
                    .forJob(context.jobDetail)
                    .withIdentity(newTriggerName, context.trigger.key.group)
                    //create a simple trigger that fires in 5 minutes * restart attempts
                    .startAt(Date.from(Instant.now().plus((5 * count).toLong(), ChronoUnit.MINUTES)))
                    .usingJobData("count", count + 1)
                    .build()
                scheduler.rescheduleJob(oldTrigger.key, newTrigger)
                log.warn("Rescheduling trigger: ${oldTrigger.key} to trigger: ${newTrigger.key}")
            } else {
                log.warn("The maximum number of restarts has been reached. Restart attempts: $count")
                rescheduleWithDefaultTrigger(context)
            }
        } else if (count > 1) {
            rescheduleWithDefaultTrigger(context)
        } else {
            log.info("Job: ${context.jobDetail.key.name} completed successfully")
        }
        context.scheduler.getTriggersOfJob(context.trigger.jobKey).forEach {
            log.info("Trigger with key: ${it.key} for job: ${context.trigger.jobKey.name} will start at ${it.nextFireTime ?: it.startTime}")
        }
    }

    private fun rescheduleWithDefaultTrigger(context: JobExecutionContext) {
        val clone = context.jobDetail.clone() as JobDetail
        val defaultTriggerName = context.trigger.key.name.split("_")[0]
        //recovery triggers should not be rescheduled
        if (!triggerCronMap.contains(defaultTriggerName)) {
            log.warn("Trigger: ${context.trigger.key.name} for job: ${context.trigger.jobKey.name} is not one of our self-written triggers. It may be a recovery trigger or something else, and it must not be rescheduled.")
            return
        }
        log.warn("Removing all triggers for job: ${context.trigger.jobKey.name} and scheduling the default trigger for it: $defaultTriggerName")
        scheduler.deleteJob(clone.key)
        scheduler.addJob(clone, true)
        scheduler.scheduleJob(
            TriggerBuilder.newTrigger()
                .forJob(clone)
                .withIdentity(defaultTriggerName)
                .withSchedule(CronScheduleBuilder.cronSchedule(triggerCronMap[defaultTriggerName]))
                .usingJobData("count", 1)
                .startAt(Date.from(Instant.now().plusSeconds(5)))
                .build()
        )
    }
}
```
Last but not least: **application.yaml**
```yaml
spring:
  quartz:
    job-store-type: jdbc # database mode
    jdbc:
      initialize-schema: never # do not initialize the table structure automatically
    properties:
      org:
        quartz:
          scheduler:
            # AUTO generates the instance ID from hostname and timestamp; it can be any string,
            # but it must be unique across the qrtz_scheduler_state INSTANCE_NAME field for all schedulers
            instanceId: AUTO
            #instanceName: clusteredScheduler #quartzScheduler
          jobStore:
            # a few problems with the two properties below:
            # [To see links please register here]
            # &
            # [To see links please register here]
            # class: org.springframework.scheduling.quartz.LocalDataSourceJobStore # persistence configuration
            driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate # database-specific delegate
            # useProperties: true # store all JobDataMap values as strings (name-value pairs instead of
            # serialized BLOBs), which avoids class-versioning problems
            tablePrefix: my_quartz.QRTZ_ # database table prefix
            # how many milliseconds a trigger may miss its next fire time before being considered
            # misfired (default: 60000, i.e. 60 seconds)
            misfireThreshold: 60000
            # how often (in milliseconds) this instance checks in with the other cluster instances;
            # affects how quickly failed instances are detected
            clusterCheckinInterval: 5000
            isClustered: true # turn on clustering
          threadPool: # thread pool
            class: org.quartz.simpl.SimpleThreadPool
            threadCount: 3
            threadPriority: 1
            threadsInheritContextClassLoaderOfInitializingThread: true
```

[Here](

[To see links please register here]

) are the official scripts for the database (use liquibase or flyway).
More information:
[About quartz](

[To see links please register here]

)
[spring boot using quartz in cluster mode](

[To see links please register here]

)
[One more article](

[To see links please register here]

)
[Cluster effectively quartz](

[To see links please register here]

)