ERROR 1047 WSREP has not yet prepared node for application use

Posted by 翟海飞

Editor's note: this article was compiled by the editors at 小常识网 (cha138.com). It covers the error "ERROR 1047 WSREP has not yet prepared node for application use" and will hopefully serve as a useful reference.

Solution 1:

TWO-NODE CLUSTERS

In a two-node cluster, a single-node failure causes the other to stop working.

Situation

You have a cluster composed of only two nodes. One of the nodes leaves the cluster ungracefully. That is, instead of being shut down through init or systemd, it crashes or suffers a loss of network connectivity. The node that remains becomes nonoperational. It remains so until some additional information is provided by a third party, such as a human operator or another node.

If the surviving node remained operational after the other left the cluster ungracefully, there would be a risk that each of the two nodes considers itself the Primary Component. To prevent this, the surviving node becomes nonoperational.
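
Before applying either fix below, you can confirm this state from the surviving node's client. This check is not part of the original article; it uses the standard wsrep status variables:

SHOW STATUS LIKE 'wsrep_ready';          -- reports OFF while the node is nonoperational
SHOW STATUS LIKE 'wsrep_cluster_status'; -- reports 'non-Primary' instead of 'Primary'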

Solutions

There are two solutions available to you:

  • You can bootstrap the surviving node to form a new Primary Component, using the pc.bootstrap wsrep Provider option. To do so, log into the database client and run the following command:

SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';

This bootstraps the surviving node as a new Primary Component. When the other node comes back online or regains network connectivity with this node, it will initiate a state transfer and catch up with this node.
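
It can be worth verifying that the bootstrap took effect. This check is not in the original text; it relies on the standard wsrep_cluster_status variable:

SHOW STATUS LIKE 'wsrep_cluster_status'; -- should now report 'Primary'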

  • In the event that you want the node to continue to operate, you can use the pc.ignore_sb wsrep Provider option. To do so, log into the database client and run the following command:

SET GLOBAL wsrep_provider_options='pc.ignore_sb=TRUE';

The node resumes processing updates, and it will continue to do so even if it suspects a split-brain situation.

Warning: Enabling pc.ignore_sb is dangerous in a multi-master setup, due to the aforementioned risk of split-brain situations. However, it does simplify matters in master-slave clusters (especially in cases where you only use two nodes).
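
If you enable the option only to bridge an outage, one sensible follow-up (my suggestion, not part of the original article) is to switch it back off once the second node has rejoined and synced:

SET GLOBAL wsrep_provider_options='pc.ignore_sb=FALSE';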

In addition to the solutions provided above, you can avoid the situation entirely using Galera Arbitrator. Galera Arbitrator functions as an odd node in quorum calculations. This means that if you enable Galera Arbitrator on one node in a two-node cluster, that node remains the Primary Component, even if the other node fails or loses network connectivity.
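
For reference, a minimal sketch of starting the arbitrator daemon, garbd, on a third host. The cluster name and member addresses below are placeholders; the --group value must match the cluster's wsrep_cluster_name:

garbd --group=my_wsrep_cluster --address="gcomm://192.168.0.1,192.168.0.2" --daemon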

http://galeracluster.com/documentation-webpages/twonode.html


Solution 2:

The likely reason is that your node1 went down ungracefully, or at least node2 thought it did. In this case the 2-node cluster reaches a split-brain situation, where the remaining part(s) of the cluster cannot decide whether they are supposed to be the Primary Component. That's why 2-node clusters are not recommended.

Check the logs of node1 to see whether it shut down normally; if it did, then check the logs of node2 to see how it perceived the situation. If node2 saw node1 shut down normally, it would log something like

[Note] WSREP: forgetting xxxxxxx (tcp://X.X.X.X:XXXX)

etc.; but if it thought the other node was lost, it would be more like

[Note] WSREP: (70f85e74, 'tcp://x.x.x.x:xxxx') turning message relay requesting on, nonlive peers: tcp://X.X.X.X:XXXX

etc.

See http://nirbhay.in/blog/2015/02/split-brain/ for more details and log examples of the split brain situation.

The cheapest way to avoid it is to use Galera Arbitrator: http://nirbhay.in/blog/2013/11/what-is-galera-arbitrator/

  • Thank you! This error appeared because I rebooted Node1 suddenly. But I only have 2 servers. Can I install 2 Galera Arbitrators on the 2 servers to resolve this issue? @elenst –  namdt55555  Nov 17 '16 at 16:13
  • Technically you can, but it won't help. If you tend to reboot the whole machine, the arbitrator that runs there will go down as well, and you will have the same split-brain, only instead of 1/1 (1 node left, 1 lost) it will be 2/2. If one of your hosts is high-risk for reboots and the other one is more stable, you might consider assigning a higher weight to the stable one –  elenst  Nov 17 '16 at 18:18
  • by running SET GLOBAL wsrep_provider_options="pc.weight=3" or something like that. In this case when the "weak" node goes down, the stronger one will know it's still primary. If it so happens that the strong one went down, you can revive the remaining one by running SET GLOBAL wsrep_provider_options='pc.bootstrap=true'. Be careful about not setting both of your nodes to bootstrap though, or you'll end up having two separate clusters. –  elenst  Nov 17 '16 at 18:22
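
Putting the commenters' suggestion together, a sketch of the two commands involved (the weight value 3 is simply the example used in the comment above):

-- On the more stable node, give it extra weight in quorum calculations:
SET GLOBAL wsrep_provider_options='pc.weight=3';
-- If the heavier node is nevertheless the one that goes down,
-- revive the remaining node manually:
SET GLOBAL wsrep_provider_options='pc.bootstrap=YES';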


Reference: https://stackoverflow.com/questions/40653238/mariadb-galera-error-when-a-node-shutdown-error-1047-wsrep-has-not-yet-prepare
