
  1. #1
    Trial Member
    [MongoDB] Setting up Replica Sets to use transactions
    Hello,

    To use MongoDB transactions with my application, I want to enable Replica Sets.
    I have a MongoDB 4.2 database running under Docker, so to enable replication I run these commands:

    Code:
    docker exec -i -t docker_db_1 /bin/bash
    mongo
    config = { _id : "rs0", members: [ { _id : 0, host : "db:27017" }] }
    rs.initiate( config )


    It seems to activate... but after a few tests I realize that transactions do not work. Any ideas?
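    For reference, the kind of check I run looks roughly like this, directly in the mongo shell against the primary (database and collection names are just placeholders; with --auth enabled you have to authenticate first):

    Code:
    // MongoDB 4.2 cannot create collections inside a transaction, so create it up front
    db.getSiblingDB("test").createCollection("txn_check")
    // start a session and run a small multi-document transaction within it
    session = db.getMongo().startSession()
    coll = session.getDatabase("test").txn_check
    session.startTransaction()
    coll.insertOne({ probe: 1 })
    coll.insertOne({ probe: 2 })
    session.commitTransaction()
    session.endSession()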
    Here are my logs from the mongodb startup:

    Code:
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten] allocator: tcmalloc
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten] modules: none
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten] build environment:
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten]     distmod: ubuntu1804
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten]     distarch: x86_64
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten]     target_arch: x86_64
    db_1         | 2020-03-25T15:09:16.410+0100 I  CONTROL  [initandlisten] options: { net: { bindIp: "*" }, replication: { replSet: "rs0" }, security: { authorization: "enabled" } }
    db_1         | 2020-03-25T15:09:16.411+0100 W  STORAGE  [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
    db_1         | 2020-03-25T15:09:16.412+0100 I  STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
    db_1         | 2020-03-25T15:09:16.412+0100 W  STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
    db_1         | 2020-03-25T15:09:16.412+0100 I  STORAGE  [initandlisten]
    db_1         | 2020-03-25T15:09:16.412+0100 I  STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
    db_1         | 2020-03-25T15:09:16.412+0100 I  STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
    db_1         | 2020-03-25T15:09:16.412+0100 I  STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
    db_1         | 2020-03-25T15:09:20.261+0100 I  STORAGE  [initandlisten] WiredTiger message [1585145360:223349][1:0x7f6e3b30db00], txn-recover: Recovering log 391 through 392
    db_1         | 2020-03-25T15:09:21.541+0100 I  STORAGE  [initandlisten] WiredTiger message [1585145361:541045][1:0x7f6e3b30db00], txn-recover: Recovering log 392 through 392
    db_1         | 2020-03-25T15:09:22.923+0100 I  STORAGE  [initandlisten] WiredTiger message [1585145362:923703][1:0x7f6e3b30db00], txn-recover: Main recovery loop: starting at 391/1958912 to 392/256
    db_1         | 2020-03-25T15:09:22.928+0100 I  STORAGE  [initandlisten] WiredTiger message [1585145362:928083][1:0x7f6e3b30db00], txn-recover: Recovering log 391 through 392
    db_1         | 2020-03-25T15:09:23.453+0100 I  STORAGE  [initandlisten] WiredTiger message [1585145363:453663][1:0x7f6e3b30db00], txn-recover: Recovering log 392 through 392
    db_1         | 2020-03-25T15:09:23.993+0100 I  STORAGE  [initandlisten] WiredTiger message [1585145363:993721][1:0x7f6e3b30db00], txn-recover: Set global recovery timestamp: (1585145309, 1)
    db_1         | 2020-03-25T15:09:24.305+0100 I  RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1585145309, 1)
    db_1         | 2020-03-25T15:09:24.400+0100 I  STORAGE  [initandlisten] Starting OplogTruncaterThread local.oplog.rs
    db_1         | 2020-03-25T15:09:24.401+0100 I  STORAGE  [initandlisten] The size storer reports that the oplog contains 25371 records totaling to 7778792 bytes
    db_1         | 2020-03-25T15:09:24.401+0100 I  STORAGE  [initandlisten] Sampling the oplog to determine where to place markers for truncation
    db_1         | 2020-03-25T15:09:24.411+0100 I  STORAGE  [initandlisten] Sampling from the oplog between Mar 24 10:39:59:1 and Mar 25 15:08:29:1 to determine where to place markers for truncation
    db_1         | 2020-03-25T15:09:24.411+0100 I  STORAGE  [initandlisten] Taking 4 samples and assuming that each section of oplog contains approximately 55505 records totaling to 17017927 bytes
    db_1         | 2020-03-25T15:09:24.417+0100 I  STORAGE  [initandlisten] WiredTiger record store oplog processing took 16ms
    db_1         | 2020-03-25T15:09:24.429+0100 I  STORAGE  [initandlisten] Timestamp monitor starting
    db_1         | 2020-03-25T15:09:24.829+0100 I  SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:24.901+0100 I  STORAGE  [initandlisten] Flow Control is enabled on this deployment.
    db_1         | 2020-03-25T15:09:24.910+0100 I  SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:24.921+0100 I  SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:24.967+0100 I  SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:24.968+0100 I  FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
    db_1         | 2020-03-25T15:09:25.130+0100 I  SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:25.131+0100 I  SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:25.138+0100 I  REPL     [initandlisten] Rollback ID is 1
    db_1         | 2020-03-25T15:09:25.162+0100 I  REPL     [initandlisten] Recovering from stable timestamp: Timestamp(1585145309, 1) (top of oplog: { ts: Timestamp(1585145309, 1), t: 1 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))
    db_1         | 2020-03-25T15:09:25.162+0100 I  REPL     [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1585145309, 1)
    db_1         | 2020-03-25T15:09:25.162+0100 I  REPL     [initandlisten] No oplog entries to apply for recovery. Start point is at the top of the oplog.
    db_1         | 2020-03-25T15:09:25.163+0100 I  SHARDING [initandlisten] Marking collection config.transactions as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:25.226+0100 I  SHARDING [initandlisten] Marking collection local.oplog.rs as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:25.282+0100 I  CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
    db_1         | 2020-03-25T15:09:25.302+0100 I  SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:25.315+0100 I  NETWORK  [listener] Listening on /tmp/mongodb-27017.sock
    db_1         | 2020-03-25T15:09:25.316+0100 I  NETWORK  [listener] Listening on 0.0.0.0
    db_1         | 2020-03-25T15:09:25.317+0100 I  NETWORK  [listener] waiting for connections on port 27017
    db_1         | 2020-03-25T15:09:25.325+0100 I  CONTROL  [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured
    db_1         | 2020-03-25T15:09:25.594+0100 I  FTDC     [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
    db_1         | 2020-03-25T15:09:25.626+0100 I  REPL     [replexec-0] New replica set config in use: { _id: "rs0", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "db:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5e7b34dc7be03b1fb9e4c4ed') } }
    db_1         | 2020-03-25T15:09:25.626+0100 I  REPL     [replexec-0] This node is db:27017 in the config
    db_1         | 2020-03-25T15:09:25.626+0100 I  REPL     [replexec-0] transition to STARTUP2 from STARTUP
    db_1         | 2020-03-25T15:09:25.627+0100 I  REPL     [replexec-0] Starting replication storage threads
    db_1         | 2020-03-25T15:09:25.631+0100 I  REPL     [replexec-0] transition to RECOVERING from STARTUP2
    db_1         | 2020-03-25T15:09:25.633+0100 I  REPL     [replexec-0] Starting replication fetcher thread
    db_1         | 2020-03-25T15:09:25.634+0100 I  REPL     [replexec-0] Starting replication applier thread
    db_1         | 2020-03-25T15:09:25.634+0100 I  REPL     [replexec-0] Starting replication reporter thread
    db_1         | 2020-03-25T15:09:25.655+0100 I  REPL     [rsSync-0] Starting oplog application
    db_1         | 2020-03-25T15:09:25.657+0100 I  REPL     [rsSync-0] transition to SECONDARY from RECOVERING
    db_1         | 2020-03-25T15:09:25.660+0100 I  ELECTION [rsSync-0] conducting a dry run election to see if we could be elected. current term: 10
    db_1         | 2020-03-25T15:09:25.665+0100 I  ELECTION [replexec-0] dry election run succeeded, running for election in term 11
    db_1         | 2020-03-25T15:09:25.720+0100 I  ELECTION [replexec-0] election succeeded, assuming primary role in term 11
    db_1         | 2020-03-25T15:09:25.720+0100 I  REPL     [replexec-0] transition to PRIMARY from SECONDARY
    db_1         | 2020-03-25T15:09:25.721+0100 I  REPL     [replexec-0] Resetting sync source to empty, which was :27017
    db_1         | 2020-03-25T15:09:25.721+0100 I  REPL     [replexec-0] Entering primary catch-up mode.
    db_1         | 2020-03-25T15:09:25.721+0100 I  REPL     [replexec-0] Exited primary catch-up mode.
    db_1         | 2020-03-25T15:09:25.721+0100 I  REPL     [replexec-0] Stopping replication producer
    db_1         | 2020-03-25T15:09:26.682+0100 I  REPL     [ReplBatcher] Oplog buffer has been drained in term 11
    db_1         | 2020-03-25T15:09:26.699+0100 I  REPL     [RstlKillOpThread] Starting to kill user operations
    db_1         | 2020-03-25T15:09:26.699+0100 I  REPL     [RstlKillOpThread] Stopped killing user operations
    db_1         | 2020-03-25T15:09:26.699+0100 I  REPL     [RstlKillOpThread] State transition ops metrics: { lastStateTransition: "stepUp", userOpsKilled: 0, userOpsRunning: 0 }
    db_1         | 2020-03-25T15:09:26.722+0100 I  REPL     [rsSync-0] transition to primary complete; database writes are now permitted
    db_1         | 2020-03-25T15:09:26.724+0100 I  SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:52.749+0100 I  NETWORK  [listener] connection accepted from 192.168.48.6:50044 #1 (1 connection now open)
    db_1         | 2020-03-25T15:09:52.857+0100 I  NETWORK  [conn1] received client metadata from 192.168.48.6:50044 conn1: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.9.0-4-amd64" }, platform: "Java/Oracle Corporation/11.0.2+9-Debian-3bpo91" }
    db_1         | 2020-03-25T15:09:53.172+0100 I  NETWORK  [listener] connection accepted from 192.168.48.6:50046 #2 (2 connections now open)
    db_1         | 2020-03-25T15:09:53.174+0100 I  NETWORK  [conn2] received client metadata from 192.168.48.6:50046 conn2: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.9.0-4-amd64" }, platform: "Java/Oracle Corporation/11.0.2+9-Debian-3bpo91" }
    db_1         | 2020-03-25T15:09:53.179+0100 I  SHARDING [conn2] Marking collection admin.system.users as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:53.914+0100 I  ACCESS   [conn2] Successfully authenticated as principal batchcorunning on corunning from client 192.168.48.6:50046
    db_1         | 2020-03-25T15:09:54.135+0100 I  SHARDING [conn2] Marking collection corunning.quartz__locks as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:54.203+0100 I  COMMAND  [conn2] CMD: dropIndexes corunning.quartz__jobs: "keyName_1_keyGroup_1"
    db_1         | 2020-03-25T15:09:54.205+0100 I  SHARDING [conn2] Marking collection corunning.quartz__jobs as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:54.664+0100 I  SHARDING [conn2] Marking collection corunning.quartz__triggers as collection version: <unsharded>
    db_1         | 2020-03-25T15:09:55.264+0100 I  NETWORK  [listener] connection accepted from 192.168.48.6:50048 #3 (3 connections now open)
    db_1         | 2020-03-25T15:09:55.268+0100 I  NETWORK  [conn3] received client metadata from 192.168.48.6:50048 conn3: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.9.0-4-amd64" }, platform: "Java/Oracle Corporation/11.0.2+9-Debian-3bpo91" }
    db_1         | 2020-03-25T15:10:01.245+0100 I  NETWORK  [listener] connection accepted from 192.168.48.6:50050 #4 (4 connections now open)
    db_1         | 2020-03-25T15:10:01.268+0100 I  NETWORK  [conn4] received client metadata from 192.168.48.6:50050 conn4: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.9.0-4-amd64" }, platform: "Java/Oracle Corporation/11.0.2+9-Debian-3bpo91" }
    db_1         | 2020-03-25T15:10:01.774+0100 I  ACCESS   [conn4] Successfully authenticated as principal batchcorunning on corunning from client 192.168.48.6:50050


    And here is the db service in my docker-compose file:

    Code:
     db:
         image: mongo:4.2.3
         restart: always
         ports:
               - "27777:27017"
         volumes:
               - '/etc/timezone:/etc/timezone:ro'
               - '/etc/localtime:/etc/localtime:ro'
         command: --auth --replSet rs0
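    For completeness, a scripted equivalent of the interactive initiation above (the container name docker_db_1 is the one from my compose project; once users exist, -u/-p/--authenticationDatabase have to be added because of --auth):

    Code:
    # non-interactive replica set initiation
    docker exec -i docker_db_1 mongo --eval 'rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "db:27017" } ] })'
    # the node should report PRIMARY once the election has completed
    docker exec -i docker_db_1 mongo --eval 'rs.status().members[0].stateStr'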



    The output of the rs.config() command:
    Code:
    rs0:PRIMARY> rs.config()
    {
    	"_id" : "rs0",
    	"version" : 1,
    	"protocolVersion" : NumberLong(1),
    	"writeConcernMajorityJournalDefault" : true,
    	"members" : [
    		{
    			"_id" : 0,
    			"host" : "db:27017",
    			"arbiterOnly" : false,
    			"buildIndexes" : true,
    			"hidden" : false,
    			"priority" : 1,
    			"tags" : { },
    			"slaveDelay" : NumberLong(0),
    			"votes" : 1
    		}
    	],
    	"settings" : {
    		"chainingAllowed" : true,
    		"heartbeatIntervalMillis" : 2000,
    		"heartbeatTimeoutSecs" : 10,
    		"electionTimeoutMillis" : 10000,
    		"catchUpTimeoutMillis" : -1,
    		"catchUpTakeoverDelayMillis" : 30000,
    		"getLastErrorModes" : { },
    		"getLastErrorDefaults" : {
    			"w" : 1,
    			"wtimeout" : 0
    		},
    		"replicaSetId" : ObjectId("5e7b34dc7be03b1fb9e4c4ed")
    	}
    }


    And the output of the rs.status() command:
    Code:
    rs0:PRIMARY> rs.status()
    {
    	"set" : "rs0",
    	"date" : ISODate("2020-03-25T15:06:25.861Z"),
    	"myState" : 1,
    	"term" : NumberLong(12),
    	"syncingTo" : "",
    	"syncSourceHost" : "",
    	"syncSourceId" : -1,
    	"heartbeatIntervalMillis" : NumberLong(2000),
    	"majorityVoteCount" : 1,
    	"writeMajorityCount" : 1,
    	"optimes" : {
    		"lastCommittedOpTime" : {
    			"ts" : Timestamp(1585148779, 1),
    			"t" : NumberLong(12)
    		},
    		"lastCommittedWallTime" : ISODate("2020-03-25T15:06:19.871Z"),
    		"readConcernMajorityOpTime" : {
    			"ts" : Timestamp(1585148779, 1),
    			"t" : NumberLong(12)
    		},
    		"readConcernMajorityWallTime" : ISODate("2020-03-25T15:06:19.871Z"),
    		"appliedOpTime" : {
    			"ts" : Timestamp(1585148779, 1),
    			"t" : NumberLong(12)
    		},
    		"durableOpTime" : {
    			"ts" : Timestamp(1585148779, 1),
    			"t" : NumberLong(12)
    		},
    		"lastAppliedWallTime" : ISODate("2020-03-25T15:06:19.871Z"),
    		"lastDurableWallTime" : ISODate("2020-03-25T15:06:19.871Z")
    	},
    	"lastStableRecoveryTimestamp" : Timestamp(1585148740, 3),
    	"lastStableCheckpointTimestamp" : Timestamp(1585148740, 3),
    	"electionCandidateMetrics" : {
    		"lastElectionReason" : "electionTimeout",
    		"lastElectionDate" : ISODate("2020-03-25T15:04:58.534Z"),
    		"electionTerm" : NumberLong(12),
    		"lastCommittedOpTimeAtElection" : {
    			"ts" : Timestamp(0, 0),
    			"t" : NumberLong(-1)
    		},
    		"lastSeenOpTimeAtElection" : {
    			"ts" : Timestamp(1585148620, 1),
    			"t" : NumberLong(11)
    		},
    		"numVotesNeeded" : 1,
    		"priorityAtElection" : 1,
    		"electionTimeoutMillis" : NumberLong(10000),
    		"newTermStartDate" : ISODate("2020-03-25T15:04:59.580Z"),
    		"wMajorityWriteAvailabilityDate" : ISODate("2020-03-25T15:04:59.678Z")
    	},
    	"members" : [
    		{
    			"_id" : 0,
    			"name" : "db:27017",
    			"health" : 1,
    			"state" : 1,
    			"stateStr" : "PRIMARY",
    			"uptime" : 98,
    			"optime" : {
    				"ts" : Timestamp(1585148779, 1),
    				"t" : NumberLong(12)
    			},
    			"optimeDate" : ISODate("2020-03-25T15:06:19Z"),
    			"syncingTo" : "",
    			"syncSourceHost" : "",
    			"syncSourceId" : -1,
    			"infoMessage" : "could not find member to sync from",
    			"electionTime" : Timestamp(1585148698, 1),
    			"electionDate" : ISODate("2020-03-25T15:04:58Z"),
    			"configVersion" : 1,
    			"self" : true,
    			"lastHeartbeatMessage" : ""
    		}
    	],
    	"ok" : 1,
    	"$clusterTime" : {
    		"clusterTime" : Timestamp(1585148779, 1),
    		"signature" : {
    			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
    			"keyId" : NumberLong(0)
    		}
    	},
    	"operationTime" : Timestamp(1585148779, 1)
    }


    Thanks, because I'm stuck...

  2. #2
    Club Member
    Hello,

    I get the feeling that some concepts are getting mixed up here...

    Replica sets are there to replicate data. This essentially provides high availability and fault tolerance.
    A single-node replica set does not make much sense.

    Transactions let you commit or roll back changes spanning several documents.

    Distributed transactions allow the same thing across replica sets or shards (MongoDB 4.2).


    In principle, to set up a 3-node replica set, here on a single machine (a shell sketch follows the list):

    • Create a key file for inter-node authentication (the simplest option)
    • Create the data and log directories for the 3 nodes
    • Write 3 config files, with the same key file, the same replica set name, different data paths, different ports, and different log paths
    • Start the 3 mongod processes
    • On node 1, initiate the replica set (rs.initiate())
    • On node 1, create an administrator with the root role
    • On node 1, reconnect as that user
    • On node 1, check the replica set status (rs.status())
    • Then add the two other nodes to the replica set (rs.add())
    • Note: the user created on node 1 is automatically replicated to the two other nodes
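    A rough shell sketch of those steps (paths, ports and the password are placeholders, and command-line flags stand in for the three config files):

    Code:
    # shared key file for internal authentication between the members
    openssl rand -base64 756 > /data/keyfile && chmod 400 /data/keyfile
    mkdir -p /data/rs0-0 /data/rs0-1 /data/rs0-2 /var/log/mongodb

    # one mongod per node: same replica set and key file, different port/dbpath/logpath
    mongod --replSet rs0 --keyFile /data/keyfile --port 27017 --dbpath /data/rs0-0 --logpath /var/log/mongodb/rs0-0.log --fork
    mongod --replSet rs0 --keyFile /data/keyfile --port 27018 --dbpath /data/rs0-1 --logpath /var/log/mongodb/rs0-1.log --fork
    mongod --replSet rs0 --keyFile /data/keyfile --port 27019 --dbpath /data/rs0-2 --logpath /var/log/mongodb/rs0-2.log --fork

    # node 1: initiate the set, then create the root administrator
    # (wait for the node to become PRIMARY; the localhost exception allows creating the first user)
    mongo --port 27017 --eval 'rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "localhost:27017" } ] })'
    mongo --port 27017 --eval 'db.getSiblingDB("admin").createUser({ user: "admin", pwd: "changeMe", roles: [ "root" ] })'

    # reconnect as that user, check the status, then add the two other members
    mongo --port 27017 -u admin -p changeMe --authenticationDatabase admin --eval 'rs.status(); rs.add("localhost:27018"); rs.add("localhost:27019")'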


    mlaunch can do all of that very simply: https://www.mongodb.com/blog/post/in...oducing-mtools
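    If I remember the options correctly (treat the exact flags as an assumption), the whole procedure above boils down to something like:

    Code:
    pip install 'mtools[mlaunch]'
    # 3-node replica set named rs0 with access control enabled, data under ./data
    mlaunch init --replicaset --nodes 3 --name rs0 --auth --dir ./data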

    That said... in my opinion, it is better to understand what you are doing. MongoDB University has a free, hands-on course covering cluster administration: https://university.mongodb.com/courses/M103/about

    Going from this model (N mongod processes on one machine) to docker/compose is fairly straightforward.
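    As a rough sketch only (service names, key file path and image tag are assumptions of mine), a 3-member compose version could look like:

    Code:
    # the key file must be readable only by the mongod user (uid 999 in the official image)
    mongo1:
        image: mongo:4.2.3
        command: --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all
        volumes:
              - './mongo-keyfile:/etc/mongo-keyfile:ro'
    mongo2:
        image: mongo:4.2.3
        command: --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all
        volumes:
              - './mongo-keyfile:/etc/mongo-keyfile:ro'
    mongo3:
        image: mongo:4.2.3
        command: --replSet rs0 --keyFile /etc/mongo-keyfile --bind_ip_all
        volumes:
              - './mongo-keyfile:/etc/mongo-keyfile:ro'
    # then, from a mongo shell inside any of the three containers:
    # rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "mongo1:27017" },
    #   { _id: 1, host: "mongo2:27017" }, { _id: 2, host: "mongo3:27017" } ] })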


    Have fun